Hacker News | zekica's comments

They have different goals:

GrapheneOS wants to make a FOSS Android with a security model that makes it hard for any malicious party to break into the phone.

LineageOS wants to make a FOSS Android that respects users' privacy first and foremost - it implements security as best it can, but the level of security protection differs across supported devices.

The good news is that if you use a boot passphrase, its security is somewhat close to GrapheneOS's. The difference is that third parties with local access to the device can still brute-force their way in, whereas with GrapheneOS they can't, unless they have access to hardware-level attacks.


That is simply wrong.

GrapheneOS is the best in terms of both security and privacy, but it currently only supports Pixel phones.

LineageOS tries to support as many devices as possible, but still ships with a lot of Google connections and misses security updates.

>Good news is that if you have a boot passphrase, it's security is somewhat close to GrapheneOS

It's not anywhere close: https://grapheneos.org/features


This is the correct response. I use both GrapheneOS and LineageOS, but LineageOS's focus is on delivering newer versions of Android to the many phones abandoned by their OEMs. GOS focuses exclusively on security and privacy. If you want a reasonably secure phone but don't want Google or Apple inside your device, your best bet is GOS.

I am overwhelmed by the specificity of your demonstrated knowledge on this topic.

How can LOS's security be somewhat close to GOS's if it's worse than the OEM's? LOS lacks verified boot and hardware security features, and it's often behind on security patches. With "advanced protection" enabled, stock OEM builds are even more secure, and GOS is more secure still. When it comes to EOL devices, LOS may be more secure than the OEM build, depending on your threat model.

https://eylenburg.github.io/android_comparison.htm


It very much depends on your personal threat model. If you expect targeted attacks, LOS doesn't hold a candle to GOS, but at least for my threat model, verified boot and hardware security features outside of my control don't have a substantial security benefit.

Obviously it would be preferable to have up-to-date security patches, but as long as there are plenty of even more easily exploitable devices out there, and no WannaCry-level attack is ongoing, it's a risk I'm willing to accept in exchange for more user freedom.


Interesting... I rarely form words in my inner thinking; instead, I make a plan out of abstract concepts (some of them have words associated with them, some don't). Maybe it's because I'm multilingual?

English is not my native language, so I'm bilingual, but I don't see how this relates to that at all. My monologue is sometimes in English, sometimes in my native language. But yeah, I don't understand any other form of thinking - it's all just my inner monologue...

Multiple local networks while still using SLAAC.


Yeah, Kate is great. New versions integrate nicely with LSPs, and while it's not as fast as Vim, it's faster than VSCode and most GNOME-based code editors.


I used it. It's an (ugly) functional programming language that can transform one XML document into another - think of it as Lisp for XML processing, but even less readable.

It can work great when you have XML you want to present nicely in a browser: you transform it into XHTML while still serving the browser the original XML. One use I had was showing the contents of RSS/Atom feeds as a nice page in the browser.
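As a sketch of that setup (this stylesheet is illustrative, not the exact one I used): the feed references the stylesheet via a processing instruction in its prolog, e.g. `<?xml-stylesheet type="text/xsl" href="feed.xsl"?>`, and the stylesheet maps Atom entries to XHTML:

```xml
<!-- feed.xsl: renders an Atom feed as an XHTML page with a list of links. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:atom="http://www.w3.org/2005/Atom">
  <xsl:template match="/atom:feed">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title><xsl:value-of select="atom:title"/></title></head>
      <body>
        <h1><xsl:value-of select="atom:title"/></h1>
        <ul>
          <!-- One list item per entry, linking to the entry's URL. -->
          <xsl:for-each select="atom:entry">
            <li><a href="{atom:link/@href}"><xsl:value-of select="atom:title"/></a></li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

The served document stays a valid Atom feed; browsers that honor the processing instruction render the transformed XHTML instead.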


I would just do this on the server side. You can even do it statically when generating the XML. In fact, until all the recent stuff about XSLT in browsers came up, I didn't even know that browsers could do it.


Converting the contents of an Atom feed into (X)HTML means it's no longer a valid Atom feed. The same is true for many other document formats, such as flattened ODF.


Is an XSLT page a valid Atom feed? Is it really so terrible to have two different pages - one for the human-readable version, and one for the XML version?


Yes, an <?xml-stylesheet href="..."?> directive is valid in every XML document. You can use CSS to get many of the benefits of XSLT here, but it doesn't let you map RSS @link attributes to HTML a/@href attributes, and CSS isn't designed for interactivity. That's a rather significant gap in functionality.

It is rather terrible to have two different pages, because that requires either server or toolchain support, and complicates testing. The XSLT approach was tried, tested, and KISS – provided you didn't have any insecure/secure context mismatches, or CORS issues, which would stop the XSL stylesheet from loading. (But that's less likely to spontaneously go wrong than an update to a PHP extension breaking your script.)


I have done the same thing with sitemap.xml.


I can: Gemini won't provide instructions on running an app as root on an Android device that already has root enabled.


But you can find that information without an LLM, right? Also, why do you trust an LLM to give it to you versus all of the other ways to get the same information, which offer higher-trust ways of communicating the desired outcome, like screenshots?

Why are we assuming that just because the prompt gets a response, it is providing proper output? That level of trust is an attack surface in and of itself.


> But you can find that information without an LLM, right?

Do you have the same opinion if Google chooses to delist any website describing how to run apps as root on Android from their search results? If not, how is that different from lobotomizing their LLMs in this way? Many people use LLMs as a search engine these days.

> Why are we assuming just because the prompt responds that it is providing proper outputs?

"Trust but verify." It’s often easier to verify that something the LLM spit out makes sense (and iteratively improve it when not), than to do the same things in traditional ways. Not always mind you, but often. That’s the whole selling point of LLMs.


That's not the issue at hand here.


Yes, yes it is.


The issue is the computer not doing what I asked.


I tried to get VLC to open up a PDF and it didn't do as I asked. Should I cry censorship at the VLC devs, or should I accept that all software only does as a user asks insofar as the developers allow it?


If VLC refused to open an MP4 because it contained violent imagery I would absolutely cry censorship.


And if VLC put in its TOS it won't open an MP4 with violent imagery, crying censorship would be a bit silly.


Non-deterministic means random - that's the definition of the word. The weather forecast is also random; in fact, a weather forecast is (to oversimplify) an average of several predictive (generative) models.


> Non-deterministic means random - that's the definition of the word.

That's not really the definition. Non-determinism just means the outcome is not a pure function of the inputs. A PRNG doesn't become truly random just because we don't know the state and seed when calling the function, and the same holds for LLMs. The non-determinism in LLMs comes from accepted race conditions in GPU floating-point math and from the PRNG in the sampler.

That's beside the point, though: we could have perfectly deterministic LLMs.
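A minimal illustration of the PRNG point: a generator with a known seed looks unpredictable, but its output is a pure function of the seed and internal state, i.e. fully deterministic.

```python
import random

# Two PRNGs with the same seed produce identical streams: the numbers only
# *look* random; they are entirely determined by the seed.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]
print(seq_a == seq_b)  # prints True
```

The same applies to an LLM run with a fixed seed and greedy (argmax) decoding, modulo the floating-point race conditions mentioned above.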


Inconsistent seems a more accessible word. It gives inconsistent results.


ChatGPT isn’t random though.

If you ask it what a star is, it's never going to tell you it's a giant piece of cheese floating in the sky.

If you don't believe me, try it: write a for loop that asks ChatGPT "what is a star (astronomy), exactly?" Ask it 1000 times and then tell me how random it is versus how consistent it is.

The idea that non-deterministic === random is totally deluded. It just means you cannot predict the exact tokens that will be produced; it doesn't mean the output is random like a random number generator and could be anything.

If you ask what Michael Jackson the entertainer is famous for, it's going to tell you he's famous for music and dancing, 1000/1000 times. Is that random?


> If you ask it what a star is, it's never going to tell you it's a giant piece of cheese floating in the sky.

Turn up the Top-P and the temperature. Raising Top-P enables the LLM to actually produce such nonsense; raising the temperature increases the chance that the nonsense is actually selected for the output.
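A toy sketch of what those two knobs do during sampling - the token strings and probabilities are made up for illustration, not taken from a real model:

```python
import math
import random

def sample(probs, temperature=1.0, top_p=1.0, seed=0):
    """Toy next-token sampler: temperature scaling plus nucleus (top-p) filtering.
    `probs` maps candidate token -> probability from a hypothetical model."""
    # Temperature rescales the logits; lower temperature sharpens the distribution.
    items = [(tok, math.log(p) / temperature) for tok, p in probs.items()]
    # Softmax back into probabilities (subtract the max for numerical stability).
    m = max(l for _, l in items)
    exps = [(tok, math.exp(l - m)) for tok, l in items]
    z = sum(e for _, e in exps)
    dist = sorted(((tok, e / z) for tok, e in exps), key=lambda x: -x[1])
    # Nucleus filtering: keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for tok, p in dist:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # Sample from the renormalized head of the distribution.
    total = sum(p for _, p in kept)
    r = random.Random(seed).random() * total
    acc = 0.0
    for tok, p in kept:
        acc += p
        if acc >= r:
            return tok
    return kept[-1][0]

probs = {"ball of plasma": 0.90, "giant cheese": 0.01, "other": 0.09}
# Near-zero temperature with a small top_p is effectively argmax: the cheese
# answer is unreachable, whatever the seed.
print(sample(probs, temperature=0.05, top_p=0.1))  # prints "ball of plasma"
# High temperature flattens the distribution, and top_p=1.0 keeps every token
# in play, so the nonsense answer gets a real chance of being selected.
```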


Sure but nobody is doing that, are they?

I'm talking about the standard settings, and in fact GPT-5 doesn't let you change the temperature anymore.

Also, that's not really the point. Humans can also be made to talk nonsense if you torture them long enough, but that doesn't mean humans are "random."

LLMs are not random, they are non-deterministic, but the two words have different meanings.

Random means you cannot tell what is going to be produced at all, i.e. a random number generator.

But if you ask an LLM "is an apple a fruit, answer yes or no only," the LLM is going to answer yes, 100% of the time. That isn't random.


I agree with everything that you've stated.


Exactly, remote attestation is only acceptable on your own devices with remote attestation servers that you control.

For example, it would be completely fine to implement remote attestation where devices issued by a company to its employees verify their TPM values against the company's servers when connecting via VPN.

All other such activities directly infringe on ownership rights.
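To sketch the mechanism under that model - the component names and hash chain below are illustrative, and a real deployment would verify a signed TPM quote rather than a bare hash:

```python
import hashlib

def pcr_extend(current: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new = SHA-256(current || SHA-256(measurement)).
    A PCR can only be extended, never set, so the final value commits to the
    whole boot chain in order."""
    return hashlib.sha256(current + hashlib.sha256(measurement).digest()).digest()

# Simulate measured boot on a known-good device (component names are made up).
golden = b"\x00" * 32
for component in (b"firmware-v1.2", b"bootloader-v7", b"kernel-6.6"):
    golden = pcr_extend(golden, component)

def attest(reported_pcr: bytes) -> bool:
    """Company-server-side check: only admit devices whose reported PCR
    matches the golden value recorded at provisioning time."""
    return reported_pcr == golden

# A tampered bootloader changes every subsequent PCR value, so the check fails.
tampered = b"\x00" * 32
for component in (b"firmware-v1.2", b"evil-bootloader", b"kernel-6.6"):
    tampered = pcr_extend(tampered, component)

print(attest(golden), attest(tampered))  # prints True False
```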


I don't see the value of remote attestation, period. Especially in the mobile world, which is a jungle where even the manufacturer itself doesn't have the full picture of all the code running on the device.

Yeah, sure, it guarantees that the device is more or less as it left the factory... and then what? What am I supposed to do with that information?


It can be valuable on devices *you own* with servers *you own* when the devices are not physically present (or even if they are).

You can get PCR values and decide if the device you are talking to is tampered with. That way, you can set a higher bar for hackers.

This is completely different from what this topic is about; I'm just saying that there is a case where it can be useful.


Am I the exception? When thinking, I don't conceptualize things in words - the compression would be too lossy. Maybe it's because I'm fluent in three languages (one Germanic, one Romance, one Slavic)?


Our brains reason in many domains depending on the situation.

For domains built primarily on linguistic primitives (e.g. legal writing), we do often reason through language. In other domains (e.g. spatial ones), we reason through vision or sound.

We experience this distinction when we study the formula versus the graph of a mathematical function: the former is linguistic, the latter is visual-spatial.

And learning multiple spoken languages is a great way to break out of particularly rigid reasoning patterns and, just as importantly, to counter biases influenced by your native language.


How exactly is this "reducing the security level to that of passwords"? For example: you can't use a passkey on an attacker's web site even if you have a plaintext copy of the private key.
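A sketch of why that holds: in WebAuthn, the authenticator signs data whose first 32 bytes are the SHA-256 hash of the relying party ID, and the server checks that hash against its own ID, so an assertion produced for a different origin can never validate. Simplified (real verification also checks the signature, flags, and counter):

```python
import hashlib

def verify_rp_id(authenticator_data: bytes, expected_rp_id: str) -> bool:
    """The first 32 bytes of WebAuthn authenticator data are SHA-256(rpId).
    The relying party rejects assertions whose hash doesn't match its own ID."""
    rp_id_hash = authenticator_data[:32]
    return rp_id_hash == hashlib.sha256(expected_rp_id.encode()).digest()

# Authenticator data as produced for the legitimate site:
# rpIdHash (32 bytes) || flags (1 byte) || signCount (4 bytes).
auth_data = hashlib.sha256(b"example.com").digest() + b"\x05" + (0).to_bytes(4, "big")

print(verify_rp_id(auth_data, "example.com"))       # prints True
print(verify_rp_id(auth_data, "evil.example.net"))  # prints False
```

So even a leaked private key only ever produces assertions bound to the site it was registered with.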


I'm not following. The issue is about it being used for the site the private key is for. The attacker's site is irrelevant here.

