Leptonmaniac's comments | Hacker News

One tiny misreading of the title on my part, and it took me several HN comments to realize this is not a website about finding hedges.

Wasn't there a comment on this phenomenon along the lines of "we were so afraid of 1984 but what we really got was Brave New World"?

The apathy of the oppressed is a core theme of 1984.

Not really? In 1984 you were made an active participant in the oppression. The Thought Police and the Two Minutes Hate both required your active, enthusiastic participation.

Brave New World was apathy: the system was comfortable, Soma was freely available, and there was a whole system to give disruptive elements comfortable but non-disruptive engagement.

The protagonist in Brave New World spends a lot of time resenting the system, but really he just resents his deformity, wants what it denies him in society, and has no real higher criticism of it beyond what he feels he can't have.


1984 has coercive elements lacking from Brave New World, but the lack of any political awareness or desire to change things among the proles was critical to the mechanisms of oppression. They were generally content with their lot, and some of the ways of ensuring that have parallels to Brave New World. Violence and hate were used more than sex and drugs but still very much as opiates of the masses: encourage and satisfy base urges to quell any desire to rebel. And sex was used to some extent: although sex was officially for procreation only, prostitution was quietly encouraged among the proles.

You might even imagine 1984's society evolving into Brave New World's as the mechanisms of oppression are gradually refined. Indeed, Aldous Huxley himself suggested as much in a letter to Orwell [1].

[1] https://gizmodo.com/read-aldous-huxleys-review-of-1984-he-se...


Can someone ELI5 what this does? I read the abstract and tried to find differences in the provided examples, but I don't understand (and don't see) what the "photorealistic" part is.

Imagine history documentaries where they take an old photo and free objects from the background and move them round giving the illusion of parallax movement. This software does that in less than a second, creating a 3D model that can be accurately moved (or the camera for that matter) in your video editor. It's not new, but this one is fast and "sharp".

Gaussian splatting is pretty awesome.


Oh man. I never thought about how Ken Burns might use that.

Already you sometimes see them manually cut out a foreground person from the background, enlarge them a little bit, and create a multi-layer 3D effect, but it's super-primitive and I find it gimmicky.

Bringing actual 3D to old photographs as the camera slowly pans or rotates slightly feels like it could be done really tastefully and well.


What are free objects?

The "free" in this case is a verb. The objects are freed from the background.

Until your comment I didn't realise I'd also read it wrong (despite getting the gist of it). Attempted rephrase of the original sentence:

Imagine history documentaries where they take an old photo, free objects from the background, and then move them round to give the illusion of parallax.


I'd suggest a different verb like "detach" or "unlink".

isolate from the background?

Even better, agreed!

> Imagine history documentaries where they take an old photo, free objects from the background

Even with the commas, if you keep the ambiguous “free” I'd suggest prefixing “objects” with “the” or “any”.


Free objects in the background.

No, free objects in the foreground, from the background.

Takes a 2D image and allows you to simulate moving the angle of the camera, with a correct-ish parallax effect and proper subject isolation (it seems to be able to handle multiple subjects in the same scene as well).

I guess this is what they use for the portrait mode effects.


It turns a single photo into a rough 3D scene so you can slightly move the camera and see new, realistic views. "Photorealistic" means it preserves real textures and lighting instead of a flat depth effect. Similar behavior can be seen with Apple's Spatial Scene feature in the Photos app: https://files.catbox.moe/93w7rw.mov

From a single picture it infers a hidden 3D representation, from which you can produce photorealistic images from slightly different vantage points (novel views).

There's nothing "hidden" about the 3D representation. It's a point cloud (in meters) with colors, and a guess at the "camera" that produced it.

(I am oversimplifying).
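
To make that concrete, here is a tiny sketch of why a colored point cloud plus a camera guess is enough for novel views (my own toy illustration in Python, not the paper's actual data format or code): reproject the points from a slightly shifted camera, and nearby points move many more pixels than distant ones, which is the parallax you see in the demos.

    # Toy illustration (not the paper's code): colored 3D points plus a pinhole
    # camera, reprojected from a slightly shifted viewpoint to get a novel view.
    import numpy as np

    def project(points_xyz, colors, camera_pos, focal_px, width, height):
        """Project 3D points (meters, camera looking down +Z) to pixel coords."""
        rel = points_xyz - camera_pos      # move points into the new camera frame
        in_front = rel[:, 2] > 0.1         # keep only points in front of the camera
        rel, colors = rel[in_front], colors[in_front]
        u = focal_px * rel[:, 0] / rel[:, 2] + width / 2
        v = focal_px * rel[:, 1] / rel[:, 2] + height / 2
        return np.stack([u, v], axis=1), colors

    # Two toy points: a near "subject" (1 m) and a far "background" (10 m).
    points = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 10.0]])
    colors = np.array([[255, 0, 0], [0, 0, 255]])

    original, _ = project(points, colors, np.array([0.0, 0.0, 0.0]), 500, 640, 480)
    shifted, _ = project(points, colors, np.array([0.05, 0.0, 0.0]), 500, 640, 480)
    print(original - shifted)   # near point shifts ~25 px, far point ~2.5 px: parallax

The real representation is of course far denser and the renderer far fancier, but the reprojection idea is the same.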


"Hidden" or "latent" in a context like this just means variables that the algo is trying to infer because it doesn't have direct access to them.

Hidden in the sense of neural net layers. I mean an intermediate representation.

Right.

I just want to emphasize that this is not a NeRF, where the model magically produces an image from an angle and then you ask "ok but how did you get this?" and it throws up its hands and says "I dunno, I ran some math and I got this image" :D.


Black Mirror episode portraying what this could do: https://youtu.be/XJIq_Dy--VA?t=14. If Apple ran SHARP on this photo and compared it to the show, that would be incredible.

Or if you prefer Blade Runner: https://youtu.be/qHepKd38pr0?t=107


One more example from Star Trek Into Darkness https://youtu.be/p7Y4nXTANRQ?t=61

I was thinking Enemy of the State (1998) https://www.youtube.com/watch?v=3EwZQddc3kY

Basically depth estimation to split the scene into various planes, then inpainting to fill in the obscured parts of those planes, and then free movement of the planes to allow for parallax. Think of 2D side-scrolling games that use several background layers at different depths to give the illusion of motion and depth.
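
A minimal sketch of that layering idea (the layer names, depths and focal length below are made up for illustration; a real pipeline would get the layers from a depth-estimation model and fill the holes with inpainting):

    # Toy example: per-layer horizontal shift for a small sideways camera move.
    layers = [
        {"name": "sky",       "depth_m": 100.0},
        {"name": "buildings", "depth_m": 20.0},
        {"name": "person",    "depth_m": 2.0},
    ]

    def parallax_offset_px(camera_shift_m, depth_m, focal_px=500):
        """Pixel shift of a layer when the camera moves sideways: focal * shift / depth."""
        return focal_px * camera_shift_m / depth_m

    for layer in layers:
        dx = parallax_offset_px(0.1, layer["depth_m"])   # 10 cm camera move
        print(f"{layer['name']:>10}: shifts by {dx:.1f} px")
    # sky: 0.5 px, buildings: 2.5 px, person: 25.0 px -- near layers move most.

The 1/depth falloff is why distant backgrounds barely move while foreground subjects swing noticeably, exactly like the layered backgrounds in a side scroller.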

Apple does something similar right now in their photos app, generating spatial views from 2d photos, where parallax is visible by moving your phone. This paper’s technique seems to produce them faster. They also use this same tech in their Vision Pro headset to generate unique views per eye, likewise on spatialized images from Photos.

Agreed, this is a terrible presentation. The paper abstract is bordering on word salad, the demo images are meaningless and don’t show any clear difference to the previous SotA, the introduction talks about “nearby” views while the images appear to show zooming in, etc.

It makes your picture 3D. The "photorealistic" part is "it's better than these other ways".

Super interesting! So if I understand correctly, all you need to do to have this in your home is gather a bunch of 1:2 tiles, cut them along the diagonal, and assemble them as shown? Awesome


For the other people who don't know: What's a DAW?


Digital audio workstation. It's used for putting together music tracks, but... the scope of that is huge. It's probably best to find some demo/tutorial on YouTube to understand it. Even the ultra-compressed Bitwig 6 announcement should give you an idea: https://youtu.be/xJF7i3x46Ec


Digital Audio Workstation: basically all those programs with multiple tracks per music channel, tons of effects and plugins, and MIDI input so they can be controlled by any kind of musical device; macros can even be assigned to specific chords or melody snippets.

The audio version of Photoshop.


Digital Audio Workstation, which is a fancy name for a kind of app where you can make a song on your computer. Whether that's a recording of an instrument or voice, or a digital instrument playing from a MIDI signal (piano, synth, anything), or recorded samples from various sources, you can put all that together however you want, including all sorts of effects (lotta plugins).


an operating system for music making, usually hosted as an application


In the spirit of that other HN post about brevity in writing:

Who?


A16Z is a very well-known venture capital firm, due to its founders, who were involved in Netscape and Loudcloud. They put a lot of late-stage money into a variety of ‘startups’, with wide-ranging results.


[flagged]


Curious, what part specifically about the debanking bit was a lie?


The idea that the actions taken were politically motivated (and not because cryptocurrency has always been scam-adjacent) was a lie, and also just nonsense.

Banks don't give a shit if you are or aren't woke any more than home insurance adjusters in Florida care if you believe in climate change or not.


Ok, it is a font alright. To me, it looks exactly like all the other fonts I have on my OS already, but I guess that's just how it is if you are not in the font bubble.


I am not in *the font bubble*, but can immediately recognise, say, the Guardian by its typography alone.

It can absolutely be a part of a text's character, and reducing this entire field to a *bubble* is a feeble attempt at spinning ignorance as a virtue.


On the screen, in small print, you may not notice the difference between this and, say, Franklin Gothic. But in print the difference is more noticeable, even if not immediately visible without taking a closer look. If you encounter a booklet purported to be from Bank of America set in this typeface instead of Franklin Gothic, it will look very similar, but you'll feel that something is slightly off.


Erik Spiekermann is a German typographer who gave a talk called (something to the effect of) “why do we need so many typefaces”. At one point, Erik simply showed a slide with “this is why”, in what is clearly the Marlboro font.

Fonts have different metrics which affect recognisability, legibility, and understanding. There are fonts which evoke a feeling (think heavy metal band), others which are practical (exaggerated letter forms to help dyslexia), and many many many bad fonts too, such as ones with bad kerning which make words like “therapists” read “the rapists” or “morn” read “mom”.

The fonts you have on your computer are all different and have their own strengths and weaknesses which affect you and your perception of what you read, even if you’re unaware of it.


The dyslexia fonts don't work:

> Researchers put the font to the test, comparing it with two other popular fonts designed for legibility—Arial and Times New Roman—and discovered that the purportedly dyslexia-friendly font actually reduced reading speed and accuracy. In addition, none of the students preferred to read material in OpenDyslexic, a surprising rebuke for a font specifically designed for the task.

> In a separate 2018 study, researchers compared another popular dyslexia font—Dyslexie, which charges a fee for usage—with Arial and Times New Roman and found no benefit to reading accuracy and speed. As with the previous dyslexia font, children expressed a preference for the mainstream fonts. “All in all, the font Dyslexie, developed to facilitate the reading of dyslexic people, does not have the desired effect,” the researchers concluded. “Children with dyslexia do not read better when text is printed in the font Dyslexie than when text is printed in Arial or Times New Roman.”

https://www.edutopia.org/article/do-dyslexia-fonts-actually-...


You've given examples of fonts for branding. Those are not everyday use fonts. We don't program with heavy metal band logos.

The fonts we actually use are interchangeable, and people outside the font bubble won't even notice the differences.


I think it's one of those things where you don't "notice" it, but where it nevertheless has an impact. Sort of like someone might not "notice" the fact that there's more butter or salt in restaurant food, but it's subjectively better than the same meal they cooked at home.

For a more directly relevant example, companies frequently A/B test changes to a UI to see which ones people like better. The specifics of those changes would be pretty marginal if you didn't know what it looked like before (like if you're a new user, you wouldn't notice if the notification was red versus purple, or what the wording in the menu is). Despite this, there are some sites that just "feel" better in a way that you can't really describe.

All of this is a long-winded way of saying that I can't tell if I'm looking at Arial, Helvetica, or this Nebula Sans font unless they were side by side (and even then I'd just be saying they're different, not identifying them by name). But I think the site would feel a lot less modern if it were written in Times New Roman. I think you'd notice if it were too hard to read when small, and I think if it looked "bad," you'd at least subconsciously notice that.


Sure, I'd happily take a more readable, less eye strain, less ink consuming, whatever, font. What is announced here, however, is "A versatile, modern, humanist sans-serif with a neutral aesthetic, designed for legibility in both digital and print applications" which is just designer speak for NIH.


You’re ignoring half of the argument. I have given some examples of branding and some examples of real use. “Legibility, and understanding” aren’t branding. “Dyslexia” isn’t branding. Typography is a branch of graphic design with a lot of study behind it. Just like subtle uses of colour and element positioning can change how you interact with, perceive, and feel about an interface, so can typefaces.

Again, just because you’re unable to notice how exactly you’re being affected does not mean you aren’t. You also don’t notice all the ways you’re affected by advertising, but they still work on you.

Yes, of course not every single subtle change to a font makes a huge impact. Just like a single subtle change to a colour’s hue doesn’t. But pronounced changes do, even when you’re unable to put your finger on it.


Sure, readability is a quality of a font, and older fonts can be worse at it. But it's hard to justify yet another designer pushing yet another font as "a versatile, modern, humanist sans-serif with a neutral aesthetic, designed for legibility in both digital and print applications" on readability alone.

If it were a non-fashion criterion, surely we'd be hitting a local maximum on readability.

I don't need or want my everyday use font to "affect" me, or to "make an impact" -- that's the branding world, again, and not aligned with readability.


You are the consumer of fonts. The user of fonts is a designer. If you don't design, this entire debate is outside your field of concern.


This thread started with

> Ok, it is a font alright. To me, it looks exactly like all the other fonts I have on my OS already, but I guess that's just how it is if you are not in the font bubble.

So, yeah. Grandparent tried to justify this new whatever by saying lots of words, none of which really seemed to matter outside of the font bubble.


Calling an entire industry a bubble is demeaning without justification.

Just because you are ignorant of something doesn't make it insignificant. That the significance is lost on you is YOUR problem.


Except the differences run much deeper than that. For example, the number of characters supported: many fonts don't support anything other than ASCII, while some support both Latin and CJK. Ligatures, the number of available weights: there are a myriad of technical reasons to pick one font over another.

There is no perfect font, just like there is no perfect framework. You pick what suits you or makes sense for your project. Sometimes you don’t understand your requirements until you try to use something.

> I don't need or want my everyday use font to "affect" me, or to "make impact"

And being aware of the details is the best way to avoid that.


I'm not into fonts at all either, but this looks crisper and more readable than anything I saw before. I don't know exactly why.


I wish you luck; your description is unfortunately very vague for a cybersecurity topic. There will probably be hundreds of articles like the one you are specifically looking for, and most of them will be published on dark-themed personal blogs written either by an Indian guy or an AI.


I think I recently (a few days ago, that is, in the last few days of 2024) saw a Vsauce video/short about this, regarding a trial that tried to use the wisdom of the crowd to get an accurate count of the jelly beans in a large jar. The trial was similarly skewed by "open" predictions, concretely people being allowed to give their predictions in the presence of their group (friends/family/etc). Unfortunately I cannot seem to find it, but your comment reminded me of that.
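
As a rough illustration of why the "open" setup hurts (a toy model I made up, not the actual experiment from the video): if each new guess gets pulled toward the guesses already announced, the errors of the earliest guessers propagate, and the crowd's average lands further from the truth than when everyone guesses privately.

    # Toy simulation: private vs. "open" (socially anchored) jelly-bean guesses.
    import random

    random.seed(0)
    TRUE_COUNT = 1000            # assumed number of beans in the jar
    N_PEOPLE, N_TRIALS = 200, 500

    def crowd_error(open_guesses):
        """Return |crowd mean - truth| for one simulated guessing round."""
        guesses = []
        for _ in range(N_PEOPLE):
            private = random.gauss(TRUE_COUNT, 300)   # noisy but unbiased estimate
            if open_guesses and guesses:
                anchor = sum(guesses) / len(guesses)  # what they heard so far
                guesses.append(0.5 * private + 0.5 * anchor)
            else:
                guesses.append(private)
        return abs(sum(guesses) / N_PEOPLE - TRUE_COUNT)

    for mode in (False, True):
        avg = sum(crowd_error(mode) for _ in range(N_TRIALS)) / N_TRIALS
        print("open   " if mode else "private", f"average error: {avg:.0f} beans")
    # The anchored crowd averages out noticeably worse, because early guesses
    # drag everyone after them.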


In Germany, "means of deception" are listed in section 136a of the Code of Criminal Procedure [1] as one of the "prohibited examination methods", alongside things like induced fatigue, hypnosis, medication or torture. I think that does count as "not allowed".

[1]: https://www.gesetze-im-internet.de/englisch_stpo/englisch_st...

