Whale 🐳 | Sunset ☀️ | Butterfly 🦋 |
---|---|---|
![]() | ![]() | ![]() |
Image segmentation demonstrated by EyeQ. 👆
EyeQ seeks to explore the current capabilities and future potential of quantum computing in image segmentation. Our goal in the 2023 NQN Hackathon was to apply a quantum algorithm to solve a real-world problem in sub-classical time complexity. Image segmentation has traditionally been performed with the minimum-cut algorithm. However, critics of this approach have pointed out issues with partition precision, where some resulting cuts may be trivially small. A newer technique performs segmentation via a Max-Cut algorithm, which is an NP-hard problem. EyeQ seeks to use quantum computing to achieve a speedup on this NP-hard problem.
Image segmentation has a wide array of important applications in the natural sciences, applied mathematics, and machine learning. For example, researchers might use satellite photos of lakes or other bodies of water to quantify drought conditions. To do so, an image can be segmented into water and land components and compared over time, as shown below:
Original Image | Water Extracted from Image |
---|---|
![]() | ![]() |

Scaled Image | Water Area Extracted by EyeQ |
---|---|
![]() | ![]() |
EyeQ's implementation of image segmentation is limited in resolution; as each pixel is represented by one qubit and Aria 1 has 23 qubits, the largest images segmented were 4x4. Nonetheless, this was a successful quantum computing application proof-of-concept and a valuable learning experience.
Download this notebook and upload it to a Microsoft Azure Quantum Workspace. Alternatively, you may use a local Azure Python setup if you have one configured.
The notebook depends on `qiskit[optimization]` and `wget`. If these packages are not available in your working environment, run the 2nd and 3rd code cells (under Imports) or run `pip install "qiskit[optimization]" wget` in your environment (the quotes keep some shells, such as zsh, from interpreting the brackets). Restart your notebook's kernel if necessary.
At the top of the notebook is a cell with configuration options.
```python
### UPDATE WITH YOUR RESOURCE ID
azure_resource_id = ""

w_pixels = 3
url = 'https://img.freepik.com/premium-vector/simple-killer-whale-pixel-art-style_475147-1552.jpg?w=1380'
backend_target = "ionq.simulator"  # "ionq.qpu.aria-1" or "ionq.simulator"
```
The only configuration REQUIRED is to put your resource ID in `azure_resource_id`. The defaults for the other options already work. You may change the segmentation resolution `w_pixels` (number of pixels along an axis) and the quantum backend `backend_target`. We've provided the URL to a sample image for convenience.
Once the packages are installed, you are ready to run the notebook. At this point, you may run all cells and use an IonQ quantum computer to segment an image!
Preprocessing: Images are condensed to the desired pixel dimensions (limited principally by the number of available qubits) using bilinear image resizing. The image is then converted into an undirected, weighted, fully connected graph: each pixel is mapped to a vertex, and the weight of the edge between two vertices is the color difference between the corresponding pixels, calculated as the sum of absolute differences across the RGB channels.
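The graph construction can be sketched as follows. This is an illustrative snippet, not the project's actual code: the `pixels` array stands in for a hypothetical 2x2 image that has already been bilinearly resized.

```python
import numpy as np

# Hypothetical 2x2 RGB image (values 0-255), flattened row-major so that
# pixel i becomes vertex i of the graph. A real run would first bilinearly
# resize the input image down to w_pixels x w_pixels.
pixels = np.array([
    [10, 10, 10],     # dark pixel
    [250, 250, 250],  # bright pixel
    [12, 9, 11],      # dark pixel
    [245, 252, 248],  # bright pixel
], dtype=int)

# Fully connected, undirected weight matrix: the weight between two
# vertices is the sum of absolute differences across the R, G, B channels.
weights = np.abs(pixels[:, None, :] - pixels[None, :, :]).sum(axis=2)

print(weights[0, 1])  # 720 -- unlike pixels get a large weight
print(weights[0, 2])  # 4   -- similar pixels get a small weight
```

Large weights connect dissimilar pixels, so a maximum cut tends to separate unlike pixels into different segments.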
Quantum Algorithm: The graph is converted to an Ising Hamiltonian, which a Variational Quantum Eigensolver (VQE) then solves for the maximum cut, using one qubit per vertex of the graph. The maximum cut splits the vertices into two sets such that the total weight of edges crossing between the sets is greatest.
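The objective VQE approximates can be illustrated classically. The sketch below brute-forces the Max-Cut objective over a hypothetical 4-vertex weight matrix (the values are made up for illustration); a real run would hand the equivalent Ising Hamiltonian to VQE rather than enumerate all assignments.

```python
import itertools

# Hypothetical weight matrix, as the preprocessing step might produce:
# w[i][j] is the color difference between pixels i and j.
w = [
    [0,   720, 4,   710],
    [720, 0,   716, 6],
    [4,   716, 0,   712],
    [710, 6,   712, 0],
]
n = len(w)

def cut_value(bits):
    # Total weight of edges crossing the partition. This equals the Ising
    # energy sum_{i<j} w_ij * (1 - z_i * z_j) / 2 with z_i = 1 - 2*bits[i].
    return sum(w[i][j] for i in range(n) for j in range(i + 1, n)
               if bits[i] != bits[j])

# Brute force over all 2^n assignments -- the classical reference value
# that VQE approximates on the quantum backend.
best = max(itertools.product([0, 1], repeat=n), key=cut_value)
print(best, cut_value(best))
```

Here the maximum cut groups the two dark pixels (vertices 0 and 2) against the two bright ones, which is exactly the segmentation behavior described above.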
Postprocessing: Based on the set assignments in the output, the pixels of the downsized image are grouped and the photo is split into two sets of pixels.
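A minimal sketch of this grouping step, assuming a hypothetical 2x2 solution bitstring (the names and values are illustrative):

```python
import numpy as np

solution = "0101"  # hypothetical VQE output: one bit per pixel, row-major
w = 2              # w_pixels for a 2x2 downsized image

# Reshape the flat bitstring back into the image grid, then build one
# boolean mask per segment.
labels = np.array([int(b) for b in solution]).reshape(w, w)
segment_a = labels == 0
segment_b = labels == 1
```

Each mask can then be applied to the resized image to extract the corresponding set of pixels.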
Here is EyeQ segmenting the image at 4x4, 3x3, and 2x2 segmentation resolutions. With more qubits, EyeQ can scale to higher resolutions.
Original Image:
8192 shots, 10 iterations
Resolution | Whale | Background | Histogram (probability vs cut solution) |
---|---|---|---|
4x4 | ![]() | ![]() | ![]() |
3x3 | ![]() | ![]() | ![]() |
2x2 | ![]() | ![]() | ![]() |
Notes and analysis:
- If the resized image is fairly symmetrical, the inverses of the solutions will often produce results of equal probability. For example, the 2x2 graph has nearly equal representation for the solutions 0100 and 1011. Both produce the same cut, just with segments labeled oppositely.
- For the 4x4 case, the 2^16 possible histogram outcomes were filtered to show only the significant cuts, improving readability.
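The inversion symmetry in the first note can be checked directly: a bitstring and its bitwise complement describe the same partition, so they always share a cut value. A small sketch with a made-up 4-vertex weight matrix:

```python
# Hypothetical weight matrix for a 4-vertex graph (illustrative values).
w = [
    [0, 5, 1, 4],
    [5, 0, 3, 2],
    [1, 3, 0, 6],
    [4, 2, 6, 0],
]

def cut_value(bits):
    # Total weight of edges whose endpoints fall in different segments.
    n = len(bits)
    return sum(w[i][j] for i in range(n) for j in range(i + 1, n)
               if bits[i] != bits[j])

a = [0, 1, 0, 0]        # e.g. the solution 0100
b = [1 - x for x in a]  # its inverse, 1011

# Same edges cross the partition either way, so the values are equal.
print(cut_value(a), cut_value(b))  # 10 10
```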
8192 shots, 10 iterations, 3x3 segmentation
Whale | Background | Histogram |
---|---|---|
![]() | ![]() | ![]() |
Due to time and cost constraints, EyeQ was restricted to 10 iterations, which reduces accuracy. The QPU's solution output space is also considerably noisier than the noiseless simulator's. This noise, combined with the low iteration count, contributes to the error in the output. For example, here is the classical solution compared to the QPU's solution:
Classical max cut | QPU max cut |
---|---|
![]() | ![]() |
While there is some overlap in the QPU's max cut solution compared to the classical solution, they are not 100% in agreement.
In this project, we compared pixels by RGB channel values, attempting to quantify how similar or different in color any two pixels are. However, there are other ways to digitally represent color, like HSV, that may yield more accurate results. For example, two colors may be visually very different yet have similar values in one or two RGB channels, and therefore be grouped together; with HSV, on the other hand, one could simply compare the hue channels.
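The converse case shows the same weakness from the other direction: two shades of one color sit far apart under the channel-wise RGB metric used here, yet have identical hue. A sketch using Python's standard-library `colorsys` (the color values are made up for illustration):

```python
import colorsys

def rgb_l1(c1, c2):
    # Channel-wise absolute difference, as used in this project.
    return sum(abs(a - b) for a, b in zip(c1, c2))

def hue(c):
    # colorsys expects channel values in [0, 1]; hue is returned in [0, 1).
    r, g, b = (v / 255 for v in c)
    return colorsys.rgb_to_hsv(r, g, b)[0]

dark_red = (80, 0, 0)
bright_red = (255, 0, 0)

# Large RGB distance, so the channel-wise metric treats these as dissimilar...
print(rgb_l1(dark_red, bright_red))    # 175
# ...yet both have hue 0.0, so a hue-based metric would group them.
print(hue(dark_red), hue(bright_red))  # 0.0 0.0
```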
For this to be a practical method of performing image segmentation, or an approach for solving similar graph-theoretic problems, access to more qubits is necessary. In our approach, each pixel (each vertex in the graph) corresponds to one qubit, so the number of qubits required scales quadratically with the side length of the input image.
Guides, tutorials, and other resources referenced during this project:
We would like to thank Microsoft, IonQ, and the whole NQN consortium for graciously hosting this hackathon. And a huge, huge thank you to the industry mentors for their help, patience, and good humor this weekend—this wouldn't have been possible without you!
- William Galvin (william-galvin)
- Andrew Kuhn (KuhnTycoon)
- Zeynep Toprakbasti (ladybuglady)
- Kenneth Yang (microBob)