By coming from a different country, their native language (i.e. what language they heard as infants) more closely resembles that country's than America's. Note I said ~47 million, and there are more than 47 million immigrants.
There are also some native-born Americans, children of immigrants, who don't have English as their first language, and people born in China whose first language is English, but those are ever smaller refinements on a specific estimate.
> ~47 million Americans aren’t native English speakers having immigrated from a non English speaking country.
Your link says 46M total, which includes native speakers, so it does not state how many non-native speakers there are. (Not that it would matter, as most would be proficient English speakers; just pointing out that you're exaggerating and your numbers are wrong.)
The question of your native language is answered long before any of what you're talking about here. A 20-year-old isn't time-traveling to have different parents when they take an exam.
It's a metal tube that has some plastic around it to make it comfortable to hold. They are basically banning the production of "comfort features", not weapons themselves.
I suppose, but Safari/WebKit shows that you can get what feels like 95% of the way there with static block lists, which are ideal when they are sufficient.
They're faster and they're trustless. The only attack surface of a block list is that someone removes their site from the list.
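For the unfamiliar, a WebKit content-blocker list really is just static JSON rules. A minimal sketch (the domain here is a placeholder):

    [
      {
        "trigger": { "url-filter": "ads\\.example\\.com" },
        "action":  { "type": "block" }
      }
    ]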
Do I understand the bottom two sections correctly? If I am using ufw as a frontend, I need to switch to firewalld instead and modify the 'docker-forwarding' policy to only forward to the 'docker' zone from loopback interfaces? It would be good if the page described how to do that, especially for users who are migrating from ufw.
More confusingly, firewalld has a different feature to address the core problem [1], but the page you linked does not mention 'StrictForwardPorts', and the page I linked does not mention the 'docker-forwarding' policy.
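From what I can piece together from the firewalld docs, 'StrictForwardPorts' is just a switch in the main config file. Untested sketch, and the version requirement is my reading of the release notes:

    # /etc/firewalld/firewalld.conf (needs a recent firewalld, 2.3+ I believe)
    StrictForwardPorts=yes

    # then apply it
    sudo firewall-cmd --reload

With that set, Docker's published ports apparently stop being implicitly reachable from outside and have to be opened explicitly.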
UFW and Docker don't work well together. Both of them call iptables (or nftables) in a way that assumes they're in control of most of the firewall, which means they can conflict or simply not notice each other's rules. For instance, UFW's rules to block all traffic get overridden by Docker's rules, because there is no active block rule (blocking is normally just the default policy) and Docker adds an explicit accept rule. UFW doesn't know about firewall chains it didn't create (even though it probably should start listing Docker ports at some point; Docker isn't exactly new...), so `ufw status` will show you only the manually configured UFW rules.
What happens when you deny access through UFW and permit access through Docker depends entirely on which of the two firewall services was loaded first, and software updates can cause them to reload arbitrarily, so you can't exactly script around that reliably.
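If you're stuck running both, Docker's docs do offer one escape hatch: rules placed in the DOCKER-USER iptables chain are evaluated before Docker's own generated rules. A minimal sketch; the interface name and subnet are placeholders for your network:

    # drop traffic to published container ports unless it comes
    # from the local subnet; DOCKER-USER runs before Docker's rules
    iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP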
If you don't trust Docker at all, you should move away from Docker (e.g. to Podman) or from UFW (e.g. to firewalld). This can be useful on hosts where multiple people spawn containers, so others won't mess up and introduce risks outside of your control as a sysadmin.
If you're in control of the containers that get run, you can prevent a container from being publicly reachable by just not binding it to any public ports. For instance, for many web interfaces, I generally just bind containers to localhost (-p 127.0.0.1:8123:80 instead of -p 80) and configure a reverse proxy like Nginx to cache/do permission stuff/terminate TLS/forward requests/etc. Alternatively, binding the port to your machine's internal network address (-p 192.168.1.1:8123:80 instead of -p 80) makes it pretty hard to misconfigure your network in such a way that the entire internet can reach that port.
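As a concrete sketch of the localhost-binding approach (the image, container name, and port are just examples):

    # container listens on loopback only; unreachable from outside the host
    docker run -d --name web -p 127.0.0.1:8123:80 nginx

    # a reverse proxy then exposes it publicly, e.g. in an nginx vhost:
    #   location / { proxy_pass http://127.0.0.1:8123; }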
Another alternative is to stuff all the Docker containers into a VM without its own firewall. That way, you can use your host firewall to precisely control which ports are open where, and Docker can do its thing inside the virtual machine.
It's a cornerstone of building cost-efficient networks. People pay for a certain-sized pipe; what they pay also covers the rest of the ISP's network and costs. With no oversubscription, the ISP would need maybe 20-30x more infrastructure. Do you think that would have an impact on what you pay?
Not sure why I have to say this, but networks are not airplanes.
Not oversubscribing is a cost multiplier at every level. 1 million 1 Gbit/s customers in a city would need 10,000 100 Gbit/s connections out of that city, and the same again for transit, and that would have no impact on pricing? All while everything sits at an average of ~1% utilization.
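Spelled out:

    1,000,000 customers x 1 Gbit/s         = 1,000,000 Gbit/s promised
    1,000,000 Gbit/s / 100 Gbit/s per link = 10,000 links out of one city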
If my ISP can only afford to supply me with 1TB of transfer at 1Gbit, that's fine. They can put it in the adverts, the contracts, and the pricing. For customers who want 10TB of transfer, they can offer a higher cost option.
And if they choose to gamble, advertising and entering into contracts promising "unlimited data" because they think it will be more profitable across their entire customer base? Then they've got to supply what they promised in the adverts. They chose to gamble that way, and if they lose money gambling, that's their business.
You usually have that on mobile subscriptions: heavy users pay more, and low-usage users are not subsidizing them.
I take it you are fine with paying 10x or even more for your no-oversubscription Internet connection, then?
Oversubscription is not gambling. The way it works, past your last-mile connection, is that ISPs watch link usage throughout their network: city-level distribution, city to city, transit, peering, etc. Once a link reaches 60-80% utilization at peak, you start looking at adding more capacity. Bad ISPs (most US ISPs) will let this go too far, though.
That's not the same thing. A more apt comparison would be promising 30 people they can have a burger while only being able to produce 5 burgers per minute. If everyone shows up at exactly the same time, you won't be able to satisfy them all at once (they'll have to wait). But overall you can consider the probability of that happening small enough to take the "gamble".
It might be if traffic had sudden jumps of like 30%, but it doesn't, and there is headroom available. Traffic increases slowly over time, so you have plenty of time to upgrade your network.
> 10gbps transit at the rock bottom rate costs $600/mo.
So then 300 Mb/s of transit, which is around what these incumbent dinosaur ISPs are offering, is ~$20/mo? And that $20/mo is only 10-20% of their large monthly bills? You're basically proving the opposing argument here in the general case [0].
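Working from the quoted rate:

    $600/mo / 10 Gbit/s = $60 per Gbit/s per month
    0.3 Gbit/s x $60    = $18/mo, call it $20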
For reference, I've asked my 1Gb/s municipal provider if they have bandwidth caps, and they told me "no" and that they are not concerned with how much bandwidth I use.
[0] The specific case is that most users are streaming video from large entertainment providers, for which the ISP isn't even paying transit, merely the electricity and rack units of CDN edge boxes.
The point of oversubscription is maintaining a network that keeps costs low while providing a good service without congestion. They monitor their network (not your last-mile connection; everything else), and once links start reaching 60-80% of capacity at peak times, they add more capacity. Bad ISPs (like most US ISPs) let this go way too far, though.
It appears that your ignorance on the topics of infrastructure and the advancement of technology over the past five decades makes having a useful conversation impossible. Not every cable in the ground was installed with today's state of the art technology. Enjoy your apparently unthrottleable internet connection.
Overbooking and oversubscription are inherently very different.
Flying is a one-time service with a specific and fixed point at which the service is provided. Its peak usage is the expected usage.
Internet access is a continuous service promise where it's nonsensical to expect the provider to predict exactly when every customer would want to use it. The peak usage is not the expected usage.
First: no, they're not. That is an unreasonable expectation, divorced from reality. What exactly do you think would happen if everyone in town switched their AC-powered devices off and on at the same time? What do you think would happen if everyone in town moved to the same street and started streaming 4K video on their phones at the same time? Do you seriously think it's reasonable to expect every system to deliver at its peak under arbitrary demand and load?
Second: if you're going to play the "I paid for this" game, this stuff is generally in the contract anyway. It is the level of service you paid for. The overbooking possibility? You paid for it; it was in your contract. Throttled service? That was in your contract too. You're getting what you paid for.
> If everybody watches the superbowl at the same time I'd expect the power grid not to fail.
"I get what I want immediately" to "the system won't fail" is a nice way to shift goalposts. If everyone shows up to their flight then the flight won't crash, it'll depart just fine with the capacity it has and offer everyone else on the next available flight. You know, the same thing that happens when the power grid is turning back on. They do it one piece of the grid at a time. Which results in you getting less than what the person next door paid for. Because that's reality.
> I am also in europe. I don't get throttled service and what you say is not in my contract. What do you say to that?
When there are a ton of people crammed into the same location overloading the network, you get throttled, whether intentionally or not, whether you like it or not. There is no way on Earth that being in Europe somehow makes you immune to reality.
The rock-bottom rate for IP transit is $60/Gbps. None of the infrastructure cost is included in that.
And that’s with Hurricane Electric. They are a bit notorious for having probably the worst routing in the industry, but they are also the cheapest in the industry.
It’s nowhere near as simple as “large fiber pipes capable of accommodating spikes”.
There are very good reasons why hyperscalers are building their own intercontinental undersea fiber networks: so they don’t have to pay for the _extremely_ expensive intercontinental transit.
Last I checked renting a wave capable of doing 400gbps between Amsterdam and New York was close to $80k/mo. A wave is basically a dedicated wavelength of light guaranteed to you and only you.
You don’t want your ISP to oversubscribe? Become your own ISP. Get an AS number. Get your own IP space (both can be done on the cheap: a /36 of v6 is basically free, and a /24 of v4 can be had for $100 a month). Get a BGP session with a transit provider. Pay them for transit.
Get IXP links so you have direct access to AWS, Google, and Netflix, and save on transit costs there! But IXP peering isn’t cheap, and at a small scale it will certainly cost more than transit.
Congratulations, you’re now paying $1000 a month for 1gbps guaranteed. It gets cheaper with scale, but scale also increases your infra costs.
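My rough guess at how that $1000 could split (only the /24 figure comes from above; the rest varies wildly by market):

    /24 of IPv4 space               ~$100/mo   (figure from above)
    transit port + 1 Gbit/s commit  ~$100-300/mo
    colo space, power, router       ~$300-500/mo
    IXP port + cross-connects       ~$100-300/mo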
Everyone would be on 10mbps if ISPs weren’t allowed to oversubscribe.
I became my own ISP as a hobby (https://bgp.tools/as/200676). This hobby costs me $200/mo, and I don’t have any real transit, just cheapo VPSes in locations convenient for me.
Wanna know what my residential ISP, whom I pay €19/mo for 1 Gbps residential service, quoted me for a BGP session at my home on a business connection? €9800 in setup fees, €2000/mo, minimum 3-year commitment, plus transit. Of course, that was a “fuck off, we just don’t want to do this” quote, but the only alternative I have here is to pull my own fiber.
From the first google result (although this was 5 years ago): “Europe gave internet service providers the right to throttle online traffic to prevent congestion as network demand spikes amid coronavirus stay-at-home and quarantine orders. Netflix and YouTube have already agreed to switch to standard-definition streaming in Europe to reduce bandwidth demand.”
> Dedicated Internet access is a thing, but it's expensive; and I'd argue that even that is oversubscribed if you go far enough up the chain.
The only way to get internet access that’s not oversubscribed is by renting (or pulling your own) layer 0. By that I mean either renting a wavelength between certain PoPs or just pulling your own fiber.
Android is a repudiation of the traditional “distro” Linux userspace. I think it’s the Android approach that has the best chance of reaching mainstream adoption in the laptop form factor with consumers.
I'm not saying "Android on a laptop is the way of the future". I'm saying the Android model, sweeping aside the status quo distros and starting fresh with a new approach to userspace, is the path I see as most likely to bring Linux to the masses in the laptop/desktop form factor.
If the people you see using Linux on laptops are developers, then I don't think they count as "consumers"; they're on the production side of things! I don't know any consumers who use Linux on a laptop. I last gave it a shot in 2018 but decided it's not for me.
In my experience, you are still much more likely to get broken hardware support when updating kernels on Linux, though I have no idea why. I almost never see or experience stuff like my laptop camera not working at all after an update on Windows, but it does happen on Linux. The same goes for GPUs, which can break if you update your kernel often.
It's not necessarily the kernel's fault, but it's something that does happen often on distros like, say, Fedora.
I think the main difference is that Linux supports and runs on almost everything, but in a lot of use cases a specific version of the kernel will be used for a product's lifetime (for embedded products), or every update will be heavily curated through a third party like Red Hat. In those cases, Linux is rock solid, far more than Windows can be.
But for regular, personal usage, I genuinely think that Linux does break more often.
I don't disagree at all. I'm completely allergic to ads at this point, hence why I use Linux a lot more :). But let's be honest: for the average user, the choice is super obvious. Yes, Windows shows you ads in the start menu, but it "works", and consistently so. They don't have to worry about a Windows update borking their GPU drivers, or about using GRUB to boot into an older kernel because some proprietary driver stopped working after an update.
I personally know enough to fix or even prevent those issues, but the vast majority of desktop users, even less-casual users like gamers, really don't care about the stuff you listed. It's sad, because it absolutely ruins the experience of what would otherwise be a great OS (the core of Windows is great IMO, just not everything on top of it).
That is somewhat true. The state of AMD's video drivers is not good on Windows and atrociously bad on Linux. NVIDIA's drivers are barely of acceptable quality on Linux.
And that's just considering core system components. Non-technical users will expect any odd-ball peripheral they pick up at Office Depot or Best Buy to work as advertised out of the box, and it probably will not.
This could have been a parallel construction mechanism: if they had sources too sensitive to reveal, they could feed that data through this project and have it appear successful.
Bonus points if enemies try to replicate the technology, letting you observe their progress and the espionage around it.
Like the space race? Clever theory, but it does not account for the fact that it actually works.
You just need first-hand experience; otherwise it's really hard for you to see that. Try it for yourself. My answer gets you to your first try: https://news.ycombinator.com/item?id=42528680
Also, your theory fails on the evidence: studies, testimony from people about psi/intuition, thousands of people's sessions on psi/RV, discreet use by law enforcement and business.
But disinformation doesn't accomplish much if the adversary disbelieves it and ignores it, as anyone with an ounce of common sense would. If you're trying to deflect from your real information source, it helps if the fake one you invent is a plausible distraction.
Hanlon's razor says the unfireable career bureaucrats overseeing this project were genuinely incompetent, and authentically stupid.
I think a lot of it was Cold War paranoia. The US government got into a lot of weird stuff like MKULTRA just because there were rumors the Soviets were working on the same thing, and no one wanted to risk the possibility, however remote, that there might be something to it.
Also probably money laundering. Apparently there were a lot of connections between the USG's various psi programs and Scientology.
psi/RV predates the Cold War, the USA, and Western culture, and there's zero doubt it works, but that will be very hard for you to see unless you get first-hand experience. My answer can get you from 0 to 1: https://news.ycombinator.com/item?id=42528680
It's a self-revealing non-dismissal to condemn something by association with something you dislike. Maybe your hick uncle is poor, or a KKK member. Should you be punished, or condemned to poverty? But the facts are: the Scientology connection, I think, is Hal Puthoff, who was temporarily a member, apparently to study that organized religion. So? US frontier science has a history with the occult, such as NASA's Jack Parsons. In the real world, the totality of the programs is much bigger than one dude; unless you're fixated on that aspect, in which case it would seem to all revolve around that, hahaha! :)
Another way to look at this: all of this skepticism is very monocultural, in fact very white, and, by association with the faux-confident dismissals here, a 'white-supremacist' viewpoint to take. While many cultures today embody pseudoscientific materialist dogma in a rush to embrace ‘scientific modernity’, there's also widespread acceptance of psi phenomena (by many names) among Chinese, Indian, Central Asian, African, and South American cultures.
There is a book about it, called PSI. They started it because the Soviets leaked info that their telepathy program, supposedly necessary for submarine comms, was successful. Same for their aura viewer, among other things.
So they assembled a team of scientists and psychics and learned that the success rate resembled random chance. Some of the psychics were also good magicians and scammers.
Reini, I think you would be great at this. Internally directed. Smart. Perceptive. And with your eyesight problem, you've likely already subconsciously enhanced psi skills to compensate; it's often correlated, sort of like a lightweight 'blindsight'.
psi/RV is 'uncommon sense', which is probably why so many of you have trouble accepting it! :) Even though it's commonly learnable.
Your thinking is all so theoretical, and divorced from the reality that there is zero doubt psi/RV works. It will be hard for you, but try it for yourself and you will see: https://news.ycombinator.com/item?id=42528680
Also, regarding the history, your idea is counterfactual, as the US created its program in response to the USSR's.
It seems that the American efforts were the victims of disinformation rather than the instigators. They were started after reports that the Soviets were already engaged in such research.
There was no disinfo, fundamentally. There's zero doubt psi/RV works. Try it yourself, or fail to understand. My answer can help you get to your first session: https://news.ycombinator.com/item?id=42528680
Most folks neg-commenting here would be perfect candidates for this: unafraid of social ostracism, internally directed, interested in it, competitive.
> What such laws actually do is entrench big companies who raise their rates across the board to subsidize the low income plans
Other places have cheaper internet driven by market forces. The low-income plans very likely aren't loss-generating.
The high prices for internet are likely a result of lack of competition, so perhaps the issue is companies can already charge what they want with impunity.
> Never mind the privacy issues of citizens having to spill their annual income to private companies for the means testing, or the bureaucratic runaround of somehow having to prove income levels ahead of time.
Do you have any links to the actual process? I was curious but couldn't find any. Either way, it might not be an issue, considering you need internet. It might be the difference between having employment and a proper education or not.
> The right answer to helping poor people is direct subsidies for whatever qualifying service they choose.
This just means giving money to ISPs for the same result.
If you really think more government action is needed, then why not ask the government to address ISP monopolies or introduce municipal providers?
> If you really think more government action is needed, then why not ask the government to address ISP monopolies or introduce municipal providers?
Municipal providers? Yes, please. I am on municipal fiber, and it is fantastic: fixed monthly price, little downtime, fantastic local tech support (not that I've ever had to call them). I've been in conversations with friends where they're going around the table complaining about various aspects of their dinosaur ISPs, and I'm just sitting there twiddling my thumbs. Internet access is now just a solved problem for me. Municipal Internet is a great example of government taking on a new mandate and directly providing service across the board to everyone. (I responded to this point first so you can hopefully see that I am arguing in good faith here, not just shilling FUD for dinosaur ISPs.)
Creating more competition in general is much less straightforward to discuss or implement. However, the law we're talking about is quite similar in spirit to the "franchise agreements" that required cable companies to build out entire cities/towns (despite some areas being a loss) in exchange for a guaranteed monopoly. It's exactly the type of thing that hurts competition.
> Other places have cheaper internet driven by market forces. The low-income plans very likely aren't loss-generating.
I agree that Internet access can be provided much more cheaply than it generally is in the US. However, the high-price market failures in the US seem to stem from construction costs, extra layers of middle management, and bureaucracy, rather than straight profit going to owners. On an ISP's balance sheet, those low-income plans will indeed show up as losses (rightly or wrongly).
> Do you have any links to the actual process? I was curious but couldn't find any. Either way, it might not be an issue, considering you need internet. It might be the difference between having employment and a proper education or not.
I was speaking about the general shape of these public-private "synergies". Given how much of the responsibility has been placed on the companies here, it's a reasonable assumption.
How does a strong coercive incentive to give private information to companies make it a non-issue? Especially in a country with little data protection regulation, and little appetite for it? I would say the strong incentive makes it more of an issue. And people needing help but not actually getting it because they don't want to spend lots of time jumping through paperwork hoops is a well-known problem.
> This just means giving money to ISPs for the same result.
No, it means keeping the power of choice with the ISP subscriber so they can weigh their own needs against everything the market offers. I wouldn't be surprised if, where building out new last-mile connections to serve $15 plans is unprofitable, ISPs respond by offering a special plan involving a 5G router to meet their legal requirements.
It also means spending public money more effectively: on things the consumer market is already providing, rather than on creating a new bespoke offering that meets the desires/constraints of government.
Alright: radar at an angle, an unknown surface that all looks the same, no landmarks to track against, unknown radar properties of the surface. How do you translate that into directional speed?
A Doppler-shifted reflection requires a feature, significant in size relative to the wavelength and somewhat perpendicular to the direction of movement, to reflect off of. For radar, I don't think a smooth sand field would have such a thing. Doppler lidar could probably detect it, but I naively assume lidar is hard in dusty environments without oodles of large moving optics to see through a dusty but optically clear window.
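For reference, the standard monostatic radar Doppler relation, which is why the geometry matters:

    f_d = (2v / λ) · cos θ

where v is the platform speed, λ the wavelength, and θ the angle between the velocity vector and the beam. A smooth surface that reflects the beam away specularly returns nothing to shift.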
To get a Doppler-shifted reflection you can also... move. Ground-based surface radars can work (somehow) with no Doppler info, cataloguing and mapping fixed reflectors, and very often just building ground maps. If your radar has multiple vertically and horizontally spaced receivers (e.g. a phased array) or you can aim your beams, you can also somehow build a 3D map, or at least map surface features/masks. This is before anything moves.
> Ground-based surface radars can work (somehow) with no Doppler info
> you can also somehow build a 3D map or at least map surface features/masks
The problem in this case was that there was nothing to reflect off of except a planar field of sand. You can't know how fast you're moving over a bare plane without some feature on that plane that you can observe.
Or perhaps that sum is incorrect; or perhaps it was state actors with the power to force algorithm changes on TikTok, or to influence TikTok via other means, without payment.
External state actors interfering with elections is a perfect reason to invalidate one.
That’s why I was asking. I got this number from a German article on the court ruling. I do not know if this is in fact the number specified in the ruling, or whether it was a “leak” or “misinformation”. So I was hoping somebody could elaborate a bit more, as I don’t speak Romanian and haven’t really followed the whole thing, and the OP I replied to mentioned “millions”.
Edit: https://archive.fo/tAcG1 (nzz.ch, paywalled) is the original source; I would argue NZZ is very trustworthy. They quote the intelligence report. You would need to translate it to English. Maybe you have a different source that puts this all in question, which I would appreciate.
Wikipedia. You are a bit off...
As for native speakers, you have the US + UK + Canada + Australia + NZ + Ireland. So more than your 380M.