>The computer is (said to be) live and in use by companies, so cryogenic cooling keeps the system temperature as close to absolute zero as possible to conserve that precious quantum state.
Before LLMs/AI became the obvious "next big thing in computing", I remember coming across a fair number of opportunistic devs on LinkedIn trying to promote themselves as "quantum software engineers", and even just a year or two ago I would see "quantum machine learning" on people's profiles. I remember thinking maybe I had missed something and checking how many qubits we could even have... needless to say it was (and still is) not enough for any meaningful ML work quite yet.
If you search you can still find some, and, as someone who has spent more than a decade doing actual machine learning, I find the claim that you're doing any kind of serious software engineering, let alone proper ML work, on a quantum computer to be almost impressively audacious.
As a fellow currently dipping my toes into quantum machine learning, I think that you think we're saying "machine learning on quantum hardware", when what we actually mean is "machine learning for quantum computing on classical hardware". That is, using machine learning on classical computers to try to increase the effectiveness of quantum hardware.
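To make that concrete, here's a toy sketch (my own illustration, not anything from the parent comment) of one thing that gets called "quantum machine learning" in this sense: an ordinary classical classifier trained to discriminate qubit readout signals. The IQ data below is synthetic and the use of scikit-learn is just an assumption for the example.

```python
# Toy sketch: classical ML used to improve a quantum-computing task.
# A plain classifier learns to tell |0> from |1> readout signals.
# The IQ-plane data is synthetic; real devices produce similar clouds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Simulated readout: each state produces a Gaussian blob in the IQ plane.
iq_0 = rng.normal(loc=[0.0, 0.0], scale=0.6, size=(n, 2))   # ground state
iq_1 = rng.normal(loc=[1.5, 1.0], scale=0.6, size=(n, 2))   # excited state
X = np.vstack([iq_0, iq_1])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Entirely classical model, run on an entirely classical computer.
clf = LogisticRegression().fit(X_train, y_train)
print(f"readout discrimination accuracy: {clf.score(X_test, y_test):.3f}")
```

In practice people reach for fancier models (neural-network readout discriminators, RL for pulse calibration, learned decoders for error correction), but the hardware doing the learning is classical either way.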
I know a couple of quantum software engineers and these people are in universities writing novel algorithms on whiteboards (and sometimes testing them out in QuPy).
As I understand it, quantum algorithms are very much needed as the hardware improves. The hardware gets shown off, but there aren't many algorithms that could take advantage of it even if it worked better. Yet.
As I understand it, the hardware very much lags behind the algorithms. We have plenty of cool algorithms to run on quantum computers, but to be practically interesting they all need more qubits or better coherence times than are available today.
The hardware very much lags behind the algorithmic advances. Much of the current push for new features in quantum hardware (mid-circuit measurement/feedforward, phonon mode coupling, etc.) comes from theorist colleagues pestering experimentalists about whether their hardware can run their algorithms yet.
In fact, this is analogous to the original motivation for the development of classical supercomputers: physicists wanted to run expensive non-perturbative lattice QCD calculations, so they co-designed some of the earliest supercomputer architectures.
Probably just marketing wank, but I got a chuckle out of "it’s not likely to be something you’ll ever have at home" as if we haven't all heard that before.
This is consistent with IBM's history of putting computers doing customers' work on display. I am aware of the company doing so in New York and Toronto.
They also have one displayed at the Cleveland Clinic main campus _cafeteria_
Imo focusing on “showing off” instead of “providing value” is a bit of a product smell. Maybe that's just the point tho: IBM seems to prioritize impressing C-suites over actually accomplishing anything.
It’s not unheard of in the medical realm. Slightly different, but when Intuitive Surgical released their da Vinci robotic surgery platform, a hospital system I worked with was early on their list. They also set up the demo unit in the cafeteria so you could see surgeons peeling oranges and then stitching them back up or whatnot.
Back in the early 2000s I worked for Cap Gemini in Birmingham, England, which had a part of the office that was some sort of partnership with IBM GS (I think IBM did the hardware and Cap got the services contracts). They also had a big blinking-lights server setup in the middle of the office for clients to see. As a teenage geek in his first tech job, I used to love going to peek at it even though I did tape rotation on the real servers in the basement most days.
It's also consistent with IBM's history of putting computers they considered historically important on display in their offices.
When I post-doc'd at TJW, I know they had a big museum-like display in the lobby (but this was years ago, who knows if it's still there), with IBM computers from history, but also things like Babbage's machines and the like.
We'd need to know how to build a useful quantum computer, find a useful algorithm to run on it (factoring products of large primes lacks broad consumer appeal), and use this demand to fund research into a way to reduce manufacturing and running costs to reasonable levels.
Either IBM has cracked optically transparent coatings that cycle from 300 K to 1 K repeatedly and acrylic that has a metal's thermal conductivity, or it's a sham.
Oh hush, you're not going to nerd snipe me into doing the thermal flux calculations today.
Optical photons don't carry an impossible amount of energy: I've seen liquid helium through a small coated window. The window was there for ion beam purposes, not "entertaining the grad student", and it was a big element in the heat budget!
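For anyone who does want to get nerd-sniped, here's a rough Stefan–Boltzmann back-of-envelope for the radiative load through a viewport. The window size, emissivity, and stage temperatures below are made-up illustrative numbers, not anything from IBM's setup.

```python
# Back-of-envelope: blackbody radiation through a small viewport onto a cold stage.
# All numbers here are illustrative assumptions, not specs of any real system.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)

d = 0.025                 # viewport diameter, m (2.5 cm, assumed)
area = 3.14159 * (d / 2) ** 2

t_hot = 300.0             # room-temperature side, K
t_cold = 1.0              # cold side, K (back-radiation is negligible)
emissivity = 0.1          # effective emissivity/transmission of a coated window (assumed)

# Net radiative power flowing through the window onto the cold stage.
p_load = emissivity * SIGMA * area * (t_hot**4 - t_cold**4)
print(f"window area: {area * 1e4:.2f} cm^2")
print(f"radiative load: {p_load * 1e3:.1f} mW")
```

With those made-up numbers you land in the tens-of-milliwatts range, which is already enormous next to the microwatts of cooling power available at the coldest stages; hence real viewports tend to be small, heavily filtered, and heat-sunk to warmer stages.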
I think the more convincing argument is that most known applications of quantum computers (sidestepping any hardware practicalities) are for niche problems (in my wheelhouse, quantum simulation); the average person has no (practically advantageous) reason to own a quantum computer.
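To put a number on why quantum simulation keeps coming up as the flagship niche: simulating n qubits exactly on a classical machine needs 2^n complex amplitudes, so the memory cost explodes. A quick illustration (just arithmetic, nothing device-specific):

```python
# Why quantum simulation is the canonical application: exact classical simulation
# of n qubits needs 2**n complex amplitudes, so memory blows up exponentially.
BYTES_PER_AMPLITUDE = 16  # one complex128 amplitude

for n in (30, 40, 50, 60):
    n_amplitudes = 2 ** n
    gib = n_amplitudes * BYTES_PER_AMPLITUDE / 2**30
    print(f"{n} qubits -> {n_amplitudes:.3e} amplitudes, ~{gib:,.0f} GiB")
```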
I suspect that once quantum computers actually scale up so that you can play with them, we'll find all sorts of interesting things to do with them.
However, even now, you can imagine that if quantum computers were small enough, it would be worth having one just for asymptotically fast integer factoring with Shor's algorithm. I don't think that's that far-fetched. Of course, people wouldn't necessarily need to know they have a quantum computer, but they don't necessarily know the workings of their computers today anyway.
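For context (my addition, not the parent's): Shor's algorithm gets its speedup from the order-finding step; the rest of factoring is classical pre/post-processing. Here's a toy sketch where the order finding is done by brute force, i.e. exactly the part a quantum computer would replace. Function names and the small test moduli are mine.

```python
# Toy illustration of the structure of Shor's algorithm: factoring N reduces to
# finding the multiplicative order r of a random a mod N. The order-finding loop
# below is brute force; that is the step a quantum computer speeds up.
from math import gcd
from random import randrange

def find_order(a, n):
    """Smallest r > 0 with a**r == 1 (mod n); brute-force stand-in for the quantum step."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_shell(n):
    """Classical reduction: turn an order-finding oracle into a nontrivial factor of n."""
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g                      # lucky: a already shares a factor with n
        r = find_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:
                f = gcd(y - 1, n)
                if 1 < f < n:
                    return f

print(shor_classical_shell(15))   # prints 3 or 5
print(shor_classical_shell(21))   # prints 3 or 7
```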