Hacker News | hardolaf's comments

Wow. So you really know nothing about the technology and are just spreading fear. The Chernobyl exclusion zone is mostly safe for people now, outside of the fact that Russia is currently bombing Ukraine.

The issue with cleanup at Fukushima Daiichi is one of money and political will, not one of technology. We've had the ability to clean up nuclear accidents since the 1950s.

Also, the future of power is increasingly looking like LNG plants, which pump only slightly less radioactive carbon into the atmosphere than coal plants do.


  > with cleanup at Fukushima Daiichi
To add a small note here: the background level of radiation is fairly safe in most of the region. The danger (including in the Chernobyl region) is more about small radioactive particulates. Vegetables in your garden could become deadly because they grew around hot material buried in the ground. The same can happen with rain runoff.

These are manageable, but expensive, and they still require care. You'd still want to arm everyone with a detector and get them into the habit of testing their food and water (highly manageable for public water or food).


The Chernobyl exclusion zone is relatively safe... for short, limited tours. There are radioactive and toxic particulates all over the place, things like Cesium-137, which is both radioactive and toxic. Artifacts irradiated in the initial meltdown and radioactive release (e.g., vehicles, buildings) remain dangerous to this day; there are machine graveyards that are absolutely forbidden to enter for safety reasons.

> The issue with cleanup at Fukushima Daiichi is one of money ...

Yes, about a trillion dollars. That's the point.

As for technology, I believe the removal of fuel rods and irradiated sandbags has only begun (with robots) in the last year. I don't believe they've fully mapped out what needs to be removed. It's not just the fuel but also the structure, such as the concrete pedestal the reactor was on (and melted through into).

Otherwise, you kinda make my point: hand waving away serious and expensive disasters with fervor bordering on the religious to essentially dismiss me as some kind of heretic.


Money and political will are in short supply everywhere. Who's to say you'd find it in the US after an accident? And why even bother when solar is cheaper and doesn't come with the same risk?


I'm going to say this in a not nice way: that's a you problem.

You willingly use a distribution which purposely ships out-of-date software based on some misguided philosophical belief that such behavior makes the system better or more stable. In reality, it just means that you're running out-of-date software with security vulnerabilities, bad driver support, and, even worse, half-assed distribution-maintainer patches to fix the aforementioned vulnerabilities.

I'm not saying that you should switch to Arch Linux, but there is a wide gap between RHEL and Debian-based distributions and a continuously rolling distribution. There are distributions that update weekly, biweekly, monthly, quarterly, etc.


AMD officially supports precisely three Linux platforms for current ROCm:

1. Ubuntu 24.04.4 with kernel 6.11

2. Ubuntu 22.04.5 with kernel 6.8

3. RHEL 9.6 with kernel 5.14

With anything else, like your preferred rolling-release distribution, you're entirely on your own.


Sure, that makes sense from a support perspective. My FPGA tools also only support a small number of OSes. But they, like ROCm, run fine on pretty much anything as up to date as those platforms or newer.
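If you want a quick sanity check that ROCm actually works on an unsupported distro, a minimal HIP device query is usually enough. This is just a sketch, assuming the HIP runtime is installed and hipcc is on your PATH:

    // check.cpp -- build with: hipcc check.cpp -o check
    #include <hip/hip_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        hipError_t err = hipGetDeviceCount(&count);
        if (err != hipSuccess) {
            // Typical failure mode on an unsupported kernel/driver combo
            std::printf("hipGetDeviceCount failed: %s\n", hipGetErrorString(err));
            return 1;
        }
        std::printf("Found %d HIP device(s)\n", count);
        for (int i = 0; i < count; ++i) {
            hipDeviceProp_t prop;
            if (hipGetDeviceProperties(&prop, i) == hipSuccess) {
                std::printf("  device %d: %s (%s)\n", i, prop.name, prop.gcnArchName);
            }
        }
        return 0;
    }

If that enumerates your card and prints the expected gfx target, the rest of the stack usually works too.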


>I'm going to say this in a not nice way: that's a you problem.

I always prefer this.

>I'm not saying that you should switch to Arch Linux,

Especially when Arch isn't supported at all by any version and is quite likely to not even work as a video card. Manjaro isn't supported either.

>but there is a wide gap between RHEL and Debian-based distributions and a continuously rolling distribution. There are distributions that update weekly, biweekly, monthly, quarterly, etc.

RHEL seems to be up to date; the RHEL release from May is well supported. I have tested out Alma in VMs, but I haven't used even Fedora or CentOS in ages.


I will agree that RHEL has gotten better about upgrading software when they do minor releases, but I'm still painfully aware of the pre-9.X days when they would release a new version and the software was already a year out of date.

I personally used Fedora for a long time at the same time as I ran Arch Linux on servers. I honestly couldn't really tell the difference as long as I was updating Fedora every time a version bump came out. The release cadence was fast enough that it never caused problems. I ended up switching to it for my home devices entirely. Though now I run SteamOS and CachyOS because they're Arch without the headaches of Arch.


> Arch isn't supported at all by any version

Not officially supported, but it works well for many people who run ROCm on it (myself included). As always, it's documented on the Arch wiki [1].

[1] https://wiki.archlinux.org/title/GPGPU#ROCm


The FPGA market has been growing in size despite GPGPU taking off. And the clock speed difference is closer to 4-5x, not 20x. Despite that and the lower area efficiency of FPGAs, there have been price- and power-competitive FPGA accelerator cards released over the last 5 years. Sure, you're not going to get an A100's performance, but you can get deterministic latency below 5us for something that the A100 would take a minimum of 50us to process. GPGPU isn't ideal for its current use case either, so FPGA-based designs have a lot of room to work in to become better, application-specific accelerators.


SV requires a linter for literally every single line change you make because the language is rotten to the core by being based on Verilog. Heck, it has an entire chapter of its LRM dedicated to the non-deterministic behavior inherent in its description of the hardware. VHDL has no such section because it is deterministic.

Both languages suck for different reasons, but no one has figured out how to make a better language and output a netlist from it (yes, there is an open interchange standard that almost every proprietary tool supports).


You're conflating open source IP cores such as LiteX with open source FPGA tooling to try to make the latter look better by using the former. Everyone knows that vendor IP is pretty terrible if you don't use it in the very narrow window in which it's validated and tested. That's why big defense contractors all use either the Northwest Logic or Rambus PCI-e cores on everything prior to Versal.

But at the same time, those cores are big and powerful, and they optimize horribly because the customers who actually use them need all of those features. Those customers aren't really concerned with area but rather with meeting performance requirements. Using the Xilinx-provided QDMA core, I've been able to achieve line-rate performance on PCI-e 4.0 x16 for large DMA transactions with a setup time of about 3 total days of work. I'd like to see an open-source solution that could even do that just ACKing raw TLPs, because I haven't found one yet.
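For context on what "line rate" means there, here's a rough back-of-envelope sketch; the 256-byte max payload and ~26 bytes of per-TLP overhead are assumptions, and real throughput depends on the host's max payload size and flow-control credits:

    // pcie4_x16.cpp -- rough theoretical DMA throughput for PCIe 4.0 x16
    #include <cstdio>

    int main() {
        const double gts_per_lane = 16.0;           // PCIe 4.0: 16 GT/s per lane
        const double encoding     = 128.0 / 130.0;  // 128b/130b line encoding
        const int    lanes        = 16;
        const double raw_GBps = gts_per_lane * encoding * lanes / 8.0;

        // Assumed large memory-write TLPs: 256 B payload, ~26 B header/seq/LCRC/framing
        const double payload = 256.0, overhead = 26.0;
        const double efficiency = payload / (payload + overhead);

        std::printf("raw link rate : %.1f GB/s\n", raw_GBps);              // ~31.5 GB/s
        std::printf("effective DMA : %.1f GB/s\n", raw_GBps * efficiency); // ~28-29 GB/s
        return 0;
    }

So "line rate" for large transfers lands somewhere in the high 20s of GB/s per direction, which is roughly the ceiling any DMA engine, vendor or open source, is chasing here.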

As for pricing, AMD/Xilinx and Altera don't want you as a customer. They want to sign $10M+/yr accounts or accounts which push the envelope of what's possible in terms of frequency (HFT). And they price their products accordingly for the public. If you actually end up as a direct customer, the prices are significantly cheaper to the point where those cheaper Chinese vendors don't make sense to use.


I'm in the USA with nominally a 1.25 Gb/s down, 50 Mb/s up connection from my cable ISP. And you'd think that it would be fast, low latency, and reliable. Well, that would be true except my ISP is Xfinity (Comcast). At least 4 times per week, I experience packet loss frequent enough that older web servers still work but most newer TCP-based technology just fails. And the connection will randomly fail for 10 minutes to 2 days at a time; sure, they give me a credit for it.

So anyways, I bring this up with my local government in Chicago and they recommend that I switch to AT&T Fiber because it's listed as available at my address in the FCC's database. Well, I would love to do that except that

1. The FCC's database was wrong, and my corrections were rejected multiple times before AT&T finally ran fiber to my building this year (only 7 years after they claimed it was available in the database, despite refusing to connect the building whenever we tried).

2. Now that it is in the building, their fiber ISP service can't figure out that my address exists and has existing copper telephone lines run to it by AT&T themselves, so their system cannot sell me the service. I've been arguing with them for 3 months on this and have even sent them pictures of their own demarc and the existing copper lines to my unit.

3. Even if they fixed the 1st issue, they coded my address as being on a different street than its mailing address and can't figure out how to sell me a consumer internet plan with this mismatch. They could sell me a business internet plan at 5x the price though.

And those are just my personal issues. I haven't even touched on how not every cell phone is equally reliable, how the switch to 5G has made many cell phones less reliable compared to 3G and 4G networks, how some people live next to live event venues where they can have great mobile connections 70% of the time but borderline unusable service the other 30%, etc.


I have AT&T fiber, 1 Gb/s symmetric, in the Bay Area. It is much better than the Xfinity coax/cable from before. I still have random slowdowns and packet loss, though. I suspect the shared fiber is oversubscribed one too many times.


> My previous job made it hard to even submit bugs/fixes to open source projects we used internally. Often we just forked b/c bureaucracy (there's a reason it was my previous job)

The process for me to submit a PR to an OSS project at my last firm took so long that by the time it got through legal review and I was authorized to see it through to the end, 1.5 months had passed and I had gone from having some free time to work on it at work to being swamped with hardware bring-up and leading a massive redesign project.


> since you can't even report bugs to many of them without an account.

I joined my current employer a bit over a month ago at this point, and legal still hasn't authorized me to open a GitHub account or to use my personal account to report issues.


> legal still hasn't authorized me to open a GitHub account

Damn. You signed a contract that prevents you from ever publishing your own code? I guess everyone in the comments insisting that you should have something to show in public all have security clearances or something. : ^ )


I signed a contract prohibiting me from doing anything that I'm compensated to do for work outside of work, or from using information gleaned from my job, without the firm's permission. This is a standard contract clause.


Except gas exports are largely being used to retire brown coal burning, which is even worse for the environment than LNG. This isn't an all-or-nothing deal even with exports. The richer countries should take on the costs of better efficiency first, and we can trickle those technologies down to other nations as they become cheaper than LNG and coal.


Yeah, 40% seems like a pipe dream. I was at a Fortune 500 defense firm and we couldn't get any cloud provider to even offer us anything close to that discount if we agreed to move to them for 3-4 years minimum. That org ended up not migrating because it was significantly cheaper to buy land and build datacenters from scratch than to rent in the cloud.


There are basically no discounts in govcloud.


Defense firms do a lot more than just government work. Also, there are definitely discounts in govcloud when Fortune 500 companies that operate 30+ datacenters start talking to govcloud providers about potentially migrating to their services.

