Seems like the search is based only on the transcript/dialogue - not an image embedding. Would be super cool to actually use some CLIP/embedding search on these for a more effective fuzzy lookup.
Agreed. If you search for Barney, say, none of the top ten results picture him at all; most are people speaking to or about him. Even running the frames through a vision LLM for a list of keywords would yield better results than the subtitles, I suspect.
You’d just run every picture through CLIP. It’s essentially the reverse of how most end users use something like Stable Diffusion (been a while since I’ve done this): instead of going text to image, CLIP embeds the image and candidate text into the same vector space, so you can score which words best describe the input image.
I’d guess famous characters like Bart and Marge and the other Simpsons characters would already be known to the model, so matching them would be pretty easy. So then you’d be able to guess who’s in each frame.
Feel free to correct me on small details if anyone has this more fresh in their mind but I’m roughly correct here.
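To make the idea concrete, here’s a toy sketch of embedding-based frame search. It assumes a CLIP-style model that maps images and text into the same vector space; the embeddings below are hypothetical placeholders (a real version would call something like open_clip’s `encode_image`/`encode_text`), so only the ranking logic is shown.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Pretend per-frame image embeddings (in practice: model.encode_image(frame))
frames = {
    "frame_001.jpg": np.array([0.9, 0.1, 0.0]),   # Barney actually on screen
    "frame_002.jpg": np.array([0.1, 0.8, 0.2]),   # Moe's tavern, no Barney
    "frame_003.jpg": np.array([0.2, 0.1, 0.9]),   # couch gag
}

# Pretend text embedding for the query (in practice: model.encode_text("Barney"))
query = np.array([0.85, 0.15, 0.05])

# Rank frames by how well their image embedding matches the text query
ranked = sorted(frames, key=lambda f: cosine_sim(query, frames[f]), reverse=True)
print(ranked[0])  # → frame_001.jpg
```

The point is that the match happens in embedding space rather than against subtitle text, so a frame can rank highly even if nobody says "Barney" in it.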
- wasn’t just a vacuum cleaner; it was a small computer on wheels
- they didn’t merely create a backdoor; they utilized it
- they hadn’t merely incorporated a remote control feature. They had used it to permanently disable my device
It was, until AI started doing it everywhere. When I took a technical writing course in college, many years ago, breaking up paragraphs with bullet-point lists was one of the core techniques taught for writing clear, effective documentation.
I don’t think there’s any single way to be sure, but it sure reads like ChatGPT to me. Which I’m not sure is such a bad thing—I presume the author used an AI to help them write the story, but the story is real. Or maybe they edited it themselves to make it sound more generic. Whatever the reason, the style takes away from my reading experience. It’s a blog post, I expect some personality!
Oh boy. Is this not essentially a neuralese CoT? They’re explicitly labelling z/z_L as a reasoning embedding that persists/mutates through the recursive process, used to refine the output embedding z_H/y. Is this not literally a neuralese CoT/reasoning chain? Yikes!
It's definitely an "on the shoulders of giants standing on the shoulders of giants" thing. Insane breakthrough technologies on top of other insane breakthroughs. Firing lasers at microscopic molten drops of metal in a controlled enough manner to get massively consistent results like what??
It’s a mind-blowing achievement, nothing short of sorcery if you think about it.
ASML machines hit tin droplets with a 25 kW laser 50,000 times a second, turning them into plasma to create the necessary extreme ultraviolet light. And despite the source generating 500 W of EUV, only a small fraction reaches the wafer due to losses along the way; I believe it was around 10%.
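A quick back-of-envelope on those numbers (all figures are from the comment above, not confirmed ASML specs; the per-pulse split is just division):

```python
laser_power_w = 25_000      # 25 kW drive laser
pulse_rate_hz = 50_000      # tin droplets hit per second
euv_generated_w = 500       # EUV light produced at the source
fraction_to_wafer = 0.10    # rough share surviving the mirror train

# Average laser energy delivered per droplet
energy_per_pulse_j = laser_power_w / pulse_rate_hz

# EUV power that actually makes it to the wafer
euv_at_wafer_w = euv_generated_w * fraction_to_wafer

print(energy_per_pulse_j)   # 0.5 J per droplet
print(euv_at_wafer_w)       # 50.0 W at the wafer
```

So laser-to-EUV conversion is only about 2% (500 W out of 25 kW), and then the mirrors eat most of that again, which is why the drive laser has to be so enormous in the first place.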
One thing I am curious about - how many generations of process shrink is one of these machines good for? They talk about regular EUV and then High-NA EUV for finer processes, but presumably each machine works for multiple generations of process shrink? If so, what needs to be adjusted to move to a finer generation of lithography and how is it done? Does ASML come in and upgrade the machine for the next process generation, or does it come out of the box already able to deliver to a resolution a few steps beyond the current state of the art?
If you’re interested in this stuff, Asianometry has lots of great videos. They’re not all on semiconductors, but he’s done a lot on the history, the developments, and what’s going on in that world.
Maybe the high water usage is at some other stage? Or intermediate preceding stages? I'd love to understand more end-to-end, as surely it isn't as easy as popping a wafer into a lithography machine the size of a semi-truck trailer.
Check out the Branch Education channel; they have a series of videos explaining how the underlying transistors are built up in 3D with multilayer exposures, etc.
One thing to understand is that you’re seeing an accumulation of over 50 years of incredible engineering and cutting-edge science; these things were invented incrementally.
Lithography is one of many steps, but probably the most important one. You use it to expose a photoresist, creating a mask for further processing. For each layer:

- Expose the photoresist.
- Develop it, removing either the exposed or the unexposed photoresist.
- The remaining photoresist is now the mask: either etch or dope the surface not covered by it, or deposit material on top.
- Remove the mask and start all over again for the next layer.

The high water usage comes from repeatedly needing to clean the surface to remove chemicals and photoresist.
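The per-layer loop described above can be sketched as code. The step names are illustrative labels, not a real fab recipe, and `process_layer` is a hypothetical helper just to show how the same cycle repeats for every mask layer:

```python
def process_layer(pattern, step="etch"):
    """One mask layer: coat, expose, develop, pattern-transfer, strip, clean."""
    assert step in ("etch", "dope", "deposit")
    return [
        "spin on photoresist",
        f"expose resist through the {pattern} mask",
        "develop (remove exposed or unexposed resist)",
        f"{step} the areas not covered by resist",
        "strip the remaining resist",
        "clean the wafer surface (a big driver of water usage)",
    ]

# A chip is dozens of such layers, each repeating the full loop:
for layer in ["transistor", "contact", "metal1"]:
    for action in process_layer(layer):
        pass  # each step is its own set of machines in a real fab
```

The cleaning step at the end of every iteration is why the water bill scales with layer count, not just wafer count.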
I think this clockwork-in-a-vacuum was preceded by eidophors: a projector with a spinning disc of oil that has an image drawn on it by an electron gun, which is then illuminated by an arc lamp.
I’ll give it a shot. Zippers/velcro are critical for most modern military gear. Elevators are used to increase the storage capacity for warplanes on aircraft carriers. Thermometers (well, any temperature sensing device) are important for many weapons systems, guidance computers, etc. Wind turbines… hmm, the infamous Stuka siren was basically a wind turbine welded to the side of the plane!
In my experience, every hotel-integrated charger is barely 5 V 0.5 A, much less PD or anything else. The parent comment was talking about the integrated USB chargers in rooms, if I’m understanding correctly (hence the need to still bring your own).