I think he means that if you're serious about dark mode, then design two graphs: one light, one dark. Solving "how do I turn a well-designed light-mode image into a dark-mode image" is an AI task that would make a nice research paper, not something a designer can hack together with a bunch of if-then rules.
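A minimal sketch of the "ship two graphs" approach, assuming you export a light and a dark version of each chart; the file names and element ID below are placeholders, not anything from the article:

```typescript
// Serve the hand-designed dark variant of a chart when the user prefers
// dark mode, instead of trying to transform the light one automatically.
// "chart-light.png" / "chart-dark.png" and "revenue-chart" are placeholders.
const darkQuery = window.matchMedia("(prefers-color-scheme: dark)");

function applyChartTheme(img: HTMLImageElement): void {
  img.src = darkQuery.matches ? "chart-dark.png" : "chart-light.png";
}

const chart = document.getElementById("revenue-chart") as HTMLImageElement;
applyChartTheme(chart);
// Re-apply if the user flips their OS theme while the page is open.
darkQuery.addEventListener("change", () => applyChartTheme(chart));
```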
I don't know OSX very well these days. Is that... is that actually installing a new global SSL trust root? Doesn't that mean ProtonMail now can seamlessly MitM all SSL connections on that machine?
Please tell me I'm reading that wrong, because I don't recall doing this for ProtonVPN on linux.
In the instructions they ask the user to "always trust" the cert for all use cases, including SSL. If you do that, any app that uses OS certs can be MITMed. It should be enough to trust the cert for IPSec only.
My crypto knowledge is limited, but I'm pretty sure you should never trust a root cert (even for "IPSec only") unless it carries responsibility and public scrutiny equal to or greater than that of a standard CA. (Unless it's the owner of the device [including you] or a close associate you trust.)
Indeed, it is enough to trust the cert just for IPSec, and we have updated the article to reflect that. We also have native applications on macOS, so the manual IKEv2 setup is not the recommended way to use ProtonVPN.
It's relatively easy for one person to source clean power themselves; it just means that technically your payment goes to a wind farm instead of the grid as a whole, but it really doesn't change much for the grid. It's much harder for everyone to get clean power; at that point you run into hard problems balancing the variable output of renewable generation.
In this case I believe it means carbon-free, so nuclear is still technically viable, though it's very unlikely any new nuclear will be built. It will mostly be existing hydro, nuclear, and geothermal, with new solar and wind phasing out gas plants.
That may be a bad idea: you have entered into a contract, one that likely doesn't account for that sort of "cancellation", and so gyms could legally keep charging you, consider the account delinquent for a while, then close it and sell that debt to a collection agency. On the other hand, 24 Hour Fitness auto-canceled my membership when I didn't go for a little while, so at least some gyms have some kind of incentive to not have people hate them.
> you have entered into a contract, one that likely doesn't account for that sort of "cancellation", and so gyms could legally keep charging you, consider the account delinquent for a while, then close it and sell that debt to a collection agency.
My wife and I own two gyms, and no matter how easy we try to make it for people to cancel - you can literally email us at 9pm the day before you get charged, and if we see it we'll cancel it - people still treat the chargeback functionality like an "oops, didn't mean it lol" thing.
Huge gyms like Gold's or PF have these oppressive terms because they have to; otherwise, at their scale, they'd be getting hit with multiple chargebacks every day at every location. We have fewer than 300 members between our two facilities and still see one every month or two, mostly from people who don't understand what chargebacks are actually for - fraud, subpar delivery of whatever you purchased, or not receiving whatever you purchased - not an easy refund button so you don't need to take 10 seconds to write an email.
The most we can hope to get out of fighting a chargeback is the money back (less the additional chargeback review fee, which is $15-30 depending on the merchant provider) and a very high likelihood of a 1-star Google and/or Facebook review. And that's the best-case scenario, when the merchant provider is willing to side with us instead of the customer, which is why we're forced to now have contracts detailing cancellation policies. It will never happen, but I really wish consumers had to pay the chargeback fees when they use the feature fraudulently like many do.
> If you're using a password manager to have unique passwords for every site, what does TOTP 2FA even protect you against?
It sounds a little obvious to write it out, but it protects against someone stealing your password in some way that the password manager / unique passwords don't protect you against. Using a PM decreases those risks significantly, mostly because of how enormous the risks of password reuse and manual password entry are without one, but it certainly doesn't eliminate them entirely.
It's not at all obvious to me, because 1Password passwords are stored in the exact same places that 1Password-managed TOTP codes are. You might as well just concatenate the TOTP secret to your password.
Having a TOTP secret would protect against theft of credentials in transit. The TOTP code is only valid once, so that credential exchange is only valid once. They wouldn't be able to create any additional login sessions with the information they've intercepted. However, there's a good chance that if they could see that, they might also be able to see a lot of other information you're exchanging with that service.
It creates a race condition in transit: if they can use the code before you, then they win. They can intercept at the network level, but also via phishing attacks; there is no domain challenge or verification in TOTP.
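To make that concrete, here is a minimal sketch of RFC 6238-style TOTP generation. The secret below is a placeholder (the RFC test key), and a real implementation should use a vetted library; the point is that nothing about the site or origin goes into the code, which is why a phishing page can simply relay a freshly typed code to the real site within the time window.

```typescript
import { createHmac } from "crypto";

// Minimal RFC 6238-style TOTP: HMAC-SHA1 over a 30-second counter, nothing else.
function totpCode(secret: Buffer, stepSeconds = 30, digits = 6): string {
  // Counter = number of complete time steps since the Unix epoch.
  const counter = BigInt(Math.floor(Date.now() / 1000 / stepSeconds));
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);

  const mac = createHmac("sha1", secret).update(msg).digest();

  // Dynamic truncation (RFC 4226): take 31 bits starting at a nibble offset.
  const offset = mac[mac.length - 1] & 0x0f;
  const slice =
    ((mac[offset] & 0x7f) << 24) |
    (mac[offset + 1] << 16) |
    (mac[offset + 2] << 8) |
    mac[offset + 3];

  return (slice % 10 ** digits).toString().padStart(digits, "0");
}

// Placeholder secret purely for illustration (the RFC 6238 test key).
console.log(totpCode(Buffer.from("12345678901234567890")));
```

The server runs the same computation and usually accepts a small window of adjacent counters, which is why an intercepted code typically stays usable for up to a minute or so.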
I know having someone malicious get into your account multiple times vs. once is likely worse, but it's hard to quantify how much worse it is - and of course, using that one login to change your 2FA setup would make them equivalently bad.
Not exactly "equivalently bad", since a user is more likely to notice a 2FA setup change than a phishing site's login error followed by everything working as usual, but yeah, perhaps it's splitting hairs at that point.
Which is why I'm wary of using my password manager for OTP and use a separate one. Not sure if it's too paranoid, but it doesn't make sense to me to keep the two in the same place.
There appear to be two points being conflated: 1/ 2FA secrets stored on a separate device from the primary device holding your PM provide more security than those stored on one device, and 2/ once you use a PM with a unique password for every site, much of what OTP helps with is already mitigated.
Both seem true, and what to do to protect yourself further depends on what kinds of attacks you're interested in stopping and at what cost. Personally, PM + U2F seems the highest-security, fastest, easiest-UX option by far: https://cloud.google.com/security-key/
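For contrast with TOTP, here is a rough sketch of what a U2F-style login looks like through the WebAuthn API (the successor to the original U2F JavaScript API); the challenge and credential ID are placeholders that would really come from the server and from key enrollment.

```typescript
// Rough sketch: requesting an assertion from a U2F/FIDO2 key via WebAuthn.
// The browser binds the signature to the site's origin (rpId), so a phishing
// domain can't obtain an assertion the real site will accept; a TOTP code,
// by contrast, says nothing about where it was meant to be used.
const serverChallenge = new Uint8Array(32); // placeholder: random bytes from the server
const credentialId = new Uint8Array(64);    // placeholder: ID saved at key enrollment

const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: serverChallenge,
    rpId: "example.com", // placeholder relying-party ID
    allowCredentials: [{ type: "public-key", id: credentialId }],
  },
});
// The assertion goes back to the server, which verifies the signature over the
// challenge and origin against the public key stored at registration.
```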
This is the thing I struggle with: name a scenario where you would have your unique site password compromised but not have at least one valid 2FA code compromised at the same time.
The best answer I have for where TOTP can provide value: you can limit a potential attack to a single login.
I wanted to say you could stop someone doing MitM decryption due to timing (you use the 2FA code before they can), but if they're decrypting your session they can most likely just steal your session cookie, which gets them what they need anyway.
Logging in to a site on a public computer where the browser auto-remembers the password you typed
A border agent forcing you to log into a website (this scenario only works if you leave your second factor, which will most likely be your phone, at home)
Usually, in a higher-security environment, we'll make sure the authenticator is a separate device (phone or hard token) and expressly forbid having a soft token on the same device as the password safe.
At least some of that seems to be in the past. Presently, Schindler covers the details well here [1], which shows what a generous reading of mkempe's point is trying to say; let's not go overboard.
I haven't read all the indictments and so might be wrong, but at this point I think it would be speculation. Time will almost certainly tell for sure. The Wikipedia definition of the term seems easy to agree on: https://en.wikipedia.org/wiki/Asset_(intelligence)
Manafort's actions in Ukraine paint him as a Russian asset in my mind (http://time.com/5003623/paul-manafort-mueller-indictment-ukr...). You're correct that this doesn't prove he was acting on behalf of Russia when he offered to work on Trump's campaign for free, though. That is speculation on my part, but it seems fairly likely.
What does NaCl have to do with this? It's a cryptography library.
The issue is that 99.99% of USB devices aren't designed with the possibility of hostile payloads coming from the host in mind, so the security rests entirely on the WebUSB permission dialog, which should be presented as "grant this website administrative access to your computer" but isn't.
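As a rough illustration of how little stands between a page and the device once that dialog is accepted, here is what the WebUSB flow looks like. The vendor ID filter and the configuration/interface numbers are placeholder values, and the snippet assumes the w3c-web-usb TypeScript definitions are available.

```typescript
// Sketch of the WebUSB flow from a web page's point of view.
// The chooser prompt below is the only gate; once granted, the page can send
// arbitrary transfers to the device, limited only by the device's firmware.
const device = await navigator.usb.requestDevice({
  filters: [{ vendorId: 0x1050 }], // illustrative vendor ID filter, not a security boundary
});
await device.open();
await device.selectConfiguration(1); // placeholder configuration number
await device.claimInterface(0);      // placeholder interface number
// From here: device.controlTransferOut(...) / device.transferOut(...) with any payload.
```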
Native Client had layered sandboxes and was still exploited. I suspect that sandboxing, in general, is not the right approach; we must find safety and correctness by construction, not by ad hoc rules or policies or permissions.
This is a million-dollar question, but it was answered a long time ago: there is no substitute for a programmer who knows what he is doing.
This is something most companies can't do. A small company can pull it off for a while, but as companies grow, the temptation to "simply make money" overwhelms even the most principled person.
Aside from the "no HID" point (does a YK even count as a HID if you turn off the default slot 1 functionality?), was the proxy designed to have a firewall / sandboxing of some sort? Google engineers have done some incredible things, and while ambitious, this kind of thing seems well within their reach.
Was there some workaround I'm missing, or did they literally go "yeah, this website can send anything to the YK device directly, what could go wrong?" Because the folks at Google Security are definitely smart and many orders of magnitude more experienced than me, and that's a vuln even I can understand / see the problem with, so something institutional must have gone badly wrong if WebUSB shipped in the stable release without some kind of block-U2F-forgery filter.
As for Yubico, I get that they are doing something pretty hard in the hardware / product-market-fit domains, and I respect that and want them to succeed, but they appear to be seriously dropping the ball on the software part of their product [1], as well as on "simplicity breeds security". They could do so much better on the actual UI/UX if they copied, piece by piece, the setup UX of a "smart" vacuum cleaner.
1. I emailed them and submitted an on-site support ticket days ago about some of their certs having expired on 2017-05-10, and I've gotten not a peep in response and no fix in sight. Did nobody set a team calendar reminder, and is nobody responsible for checking it on a monthly / quarterly / at the very least end-of-year cycle? That seems like pretty elementary "underwear goes inside the pants" security competence.
Of course U2F devices should be excluded from the list, and there should be some warning text about "do not allow important devices on random websites", but that doesn't seem like a huge deal.
Playing devil's advocate here (because I do agree this would be ridiculous, but I think it's worth pointing out): you can never completely rule out tricking the user. They could always download a file and run it to bypass the browser or something. So the question really is how easy it is to trick the user here.