
Well. I have worked quite a bit with Chinese businesses and let's just put it like this: It does not matter much whatever license you think the SW or HW has. Whatever is available will be used and modified to the liking of the customer.

It is common for new employees to walk in with code bases from previous projects they have worked on, and there is a great deal of administration involved in ensuring that no one else gets to work with more code than they absolutely need. Local builds and copying binary archives are common practice!


> It does not matter much whatever license you think the SW or HW has. Whatever is available will be used and modified to the liking of the customer.

yeah, china is just not playing the stupid licensing and copyright game. chinese companies infringe trademarks and break licenses all the time. doing so boosts their business/economy overall so the state just doesn't bother with enforcing anything.

the annoying thing is that we're not really playing the game either, or at least, we have laws that get enforced "randomly". it was recent news that meta torrented some 80 terabytes of stuff (https://arstechnica.com/tech-policy/2025/02/meta-torrented-o...) and essentially nothing happened. back in the day aaron swartz was driven to suicide over a way smaller chunk of scientific papers downloaded.


Bourgeois dictatorship vs leninist party state


I hate to break it to you but it's extremely common for Western employees to do the same. I had to investigate what seemed like a new employee attempting to begin an exfil of our source code, only to discover what looked like all of the core code of EMC, the new employee's last employer, in a gdrive folder.

Incredibly common.


Again, if it’s open source there’s nothing illegal in it.

The point of closed source is that it allows the people doing the research and engineering to recoup and profit off their investments.

Which I’m in favor of.

However, considering currently the disproportionately overwhelming majority of the value is being captured by American firms, and the US has proven to be an unreliable partner, it’s in both the EU and China’s interests to eliminate the profit from this industry entirely in order to promote competitiveness with the US based front runners.


Neither safety nor the environment is something you can easily wave away like that. Also, you're completely missing security and legislative concerns.

I have worked in the automotive embedded software industry since 2009 and I got caught up in the safety track of my career. It's a strange place to be, because the basics are extremely simple, yet it takes hundreds if not thousands of man-years to get a modern vehicle reasonably safe just in terms of the electrical system (which in automotive terms includes the software). There are so many ways to make a mistake that could easily result in an accident. Even the window regulators have non-trivial implementation concerns for anti-pinch. Allowing a random hacker to override this is a terrible idea. Now imagine what kind of mess you could make with brakes and steering...
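
To give a feel for it, the anti-pinch part alone looks roughly like this (a made-up sketch with hypothetical sensor names and thresholds, not code from any real window-lift ECU; in practice the thresholds are calibrated against temperature, supply voltage, seal friction and aging):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical interface: real window-lift ECUs read motor current
     * (or a Hall-effect position sensor) and compare against calibrated
     * thresholds rather than fixed constants. */
    #define PINCH_CURRENT_LIMIT_MA  4500u  /* made-up value */
    #define PINCH_ZONE_TOP_MM        200u  /* anti-pinch only applies near the top */

    extern uint16_t motor_current_ma(void);
    extern uint16_t window_gap_mm(void);
    extern void     motor_stop(void);
    extern void     motor_reverse(void);

    /* Called from the periodic task while the window is closing. */
    void antipinch_monitor(void)
    {
        bool in_pinch_zone = (window_gap_mm() < PINCH_ZONE_TOP_MM);

        if (in_pinch_zone && (motor_current_ma() > PINCH_CURRENT_LIMIT_MA)) {
            /* Possible obstruction: stop and back off. A threshold that is
             * too high traps fingers; one that is too low makes the window
             * reverse on every stiff seal in winter. */
            motor_stop();
            motor_reverse();
        }
    }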

Designing a vehicle to be hackable will very likely lead to an unsafe vehicle.

I believe what I just wrote applies similarly for security too.

Furthermore, before releasing software to the market, extensive testing is carried out by an independent body to ensure that legislation is followed. Even conceivably simple things such as lighting or headlamp beam alignment are a pretty large problem domain by themselves. The same goes for the communication standards for diagnostics alone.

I would say that large changes would be required to transform this industry. In some protected domains there is use of open source, such as Qt/Linux for the HMI, but opening the HMI up to be fully hackable is unlikely to happen. There is quite some liability involved in making the HMI non-distracting.


Oh dear, I wonder how I'll ever be able to use the code I wrote over the years that controls uncounted lathes, mills, plasmacutters, lasers and a whole raft of other industrial tools.

Obviously the only people that can be trusted with our safety are the manufacturers, because the people whose lives are on the line are irresponsible madmen.

> Designing a vehicle to be hackable will very likely lead to an unsafe vehicle.

Vehicles are hackable, but they're not documented which makes them more dangerous, not less dangerous. Witness comma.ai and others.


> I wonder how I'll ever be able to use the code I wrote over the years that controls uncounted lathes, mills, plasmacutters, lasers and a whole raft of other industrial tools.

You are knowledgeable enough to make them work. Many aren't. Some can't be. Hacking requires knowledge and skill, and most importantly, being contained. Cutting yourself with your self-programmed hackable laser in your garage is unfortunate, but cutting other people is a disaster you can't afford.

> Vehicles are hackable, but they're not documented which makes them more dangerous, not less dangerous. Witness comma.ai and others.

I see two points here.

1. Security through obscurity is bad. That's true, but we have "business" in the play, so that's how it goes. Maybe push for better regulation.

2. comma.ai, an "autopilot" based on reverse engineering (or, as you put it, on a base product that is "not documented") is thus "more dangerous". No, it's dangerous not because the base product is undocumented, but because there's no real autopilot at the moment, and comma.ai irresponsibly advertises being able to "drive for hours without driver action". There are many "black box" products with a ToS that forbids reverse engineering. Does that make those products inherently more dangerous too?

Besides, you seem to suggest that, with open products, people cannot make things less safe. That's not true. Some don't know what they are doing when they "hack" things.


I'm all for open things, but that's a false equivalence. You don't use those tools on a public road around unsuspecting others.

In the same way you can't just merrily hack about with a plane. The FAA don't really care that much if you die in your experiment. They do care if the burning wreckage falls on someone minding their own business.


No, Jacques and I hand those things off to random unskilled laborers who come work at a robot cell or CNC for $15/hr without any real knowledge of the code and safety standards. "Push the green button, trust us, it's safe."

FAA regulations for experimental or kit aircraft are wide open. As are automotive standards, having just helped my buddy get his sand rail DOT certified for use on the road. A tube chassis, '65 Volkswagen rear end and VIN, safety glass and manually-squeezed washer bottle and manually-swung windshield wipers (for the DOT guy, then never used again), un-boosted non-ABS brakes with a cutting brake in the loop, and it's cleared for use on the Interstate if we were crazy enough to do that. It's only the manufacturers who claim they're protecting the public by locking down their designs.


Road approval looks very different for mass-produced vehicles. It is not an easy thing to pass.


And what makes you think that the current crop of automotive software written in either asm or unsafe C is going to be any better than what you or I would produce? I've had a very recent model Mercedes C-class nearly kill me twice on account of buggy software. So much for that 'stellar' (pun intended) reputation. My current car is as dumb as it possibly could be.

I'd expect that if any ECU software was to be released that we'd finally realize how bad things really are and that there would be a massive amount of work done on making sure these pieces of critical software would be as safe as they could possibly be.

Note that the norm is 'a subset of C deemed to be safe' but that what I've seen of such development would not pass my personal threshold for quality work. In fact, rather the opposite. On the plus side, the hardware people usually know their stuff and realize what is dangerous to pass to the software people so with some luck your vehicle will use an FPGA for any kind of really safety critical stuff (or processors embedded with the relevant hardware, such as ABS and so on).


> I'd expect that if any ECU software was to be released that we'd finally realize how bad things really are and that there would be a massive amount of work done on making sure these pieces of critical software would be as safe as they could possibly be.

toyota/denso michael barr testimony cough

edit: oh, there's even slides now, you don't even have to read the court transcript https://www.safetyresearch.net/Library/BarrSlides_FINAL_SCRU...


Yes, that's one of the better known cases. But I've looked at similar stuff under NDA and I'm more than a little bit uncomfortable with some of the stuff that I've seen.

The scariest kind of developer to me are the ones that believe that they and only they are able to create safe software without outside scrutiny of what they produce.

Of course that scrutiny would have exposed emissions cheating software right on day #1.


This comment chain appears to have a fundamental misconception of what constitutes safe and what does not.

Automotive standards and automotive coding standards approach safety in a different way than most people think (and given your comments I would say this includes you). If you're curious, you can have a look at some rules to evaluate automotive code that are published here: https://github.com/github/codeql-coding-standards

In short, the rules do not aim to eliminate failure or crashes, but rather to make the crash predictable and uniform when it occurs so that it can be dealt with. This is further complicated by where and how the automotive manufacturer chooses to implement safety controls. It is entirely possible to have a bunch of unsafe code running somewhere on a car, and simply have a small safety shim around said code that prevents the unsafe code from impacting the safe operation of the vehicle.
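
As a toy illustration of that shim idea (entirely hypothetical names, and far simpler than anything an assessor would actually accept): the unqualified code computes a request, and a small qualified wrapper clamps it before it ever reaches the actuator.

    #include <stdint.h>

    /* Unqualified ("QM") code: may be complex, third-party, buggy. */
    extern int32_t qm_compute_torque_request_nm(void);

    /* Limits owned by the small, qualified safety shim. */
    #define TORQUE_MAX_NM   250
    #define TORQUE_MIN_NM   (-50)

    /* The shim is the only path to the actuator. Whatever the QM code
     * computes, the output stays inside the safe envelope, so a bug in
     * the unqualified code cannot violate the safety goal on its own. */
    int32_t safe_torque_request_nm(void)
    {
        int32_t req = qm_compute_torque_request_nm();

        if (req > TORQUE_MAX_NM) req = TORQUE_MAX_NM;
        if (req < TORQUE_MIN_NM) req = TORQUE_MIN_NM;

        return req;
    }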

With that in mind, let's take the example that you use here of emissions cheating software. Emissions is likely not considered safety relevant (it might not even be QM, it just might be some code) and so no safety requirement applies to it. So, no real scrutiny would happen there regardless, at least from a safety perspective. See, validating that software passes a particular safety certification is time and money intensive and manufacturers therefore keep the amount of code that they qualify as safe to a minimum. This means as an example that the infotainment systems of many manufacturers are not safety relevant and no safety function should exist on or interact with them.

A few other things to consider from other threads:

- Tesla doesn't necessarily follow or adhere to safety standards. They (Tesla) are explicitly non-compliant in some cases, and this is partially why there are investigations into their practices.

- Industrial robotics code is just as bad if not worse than most automotive software, from what I've seen. As you note, it's just that these robots are not normally under manual control.

- None of this prevents the software from being open source. There are plenty of safety qualified open source projects. This simply limits who can contribute and how contributions are managed. The main reason why many things in automotive are not open source is that the ECU manufacturer isn't interested in doing so, and the Tier 1/2/3 that does the implementation is even less so.


There is a difference between what I with my layman-user cap on would consider unsafe (say, releasing purposefully harmful software into the environment or software with bugs like phantom braking) and what I with my embedded software programmer cap on (machine control, aerospace related) would consider to be unsafe. I spend a lot of my time reading root cause analysis on software failures and my only conclusion is that this is a very immature industry with a lot of players that should probably not be trusted as much as we do.

As for safety shims: for instance, watchdog timers are often used for this purpose, to make a system that is exhibiting buggy behavior behave more predictably. Personally I would consider any watchdog error that was not directly related to a hardware fault (say a bitflip) as a failure, and I'd like to have my car report such failures.
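
What I mean is roughly this shape (a sketch with hypothetical hooks): the supervisor doesn't just silently let the reset happen, it records which task went missing so the failure can actually be reported later.

    #include <stdint.h>

    #define TASK_COUNT  3u
    #define ALL_ALIVE   ((1u << TASK_COUNT) - 1u)

    static volatile uint32_t alive_bits;

    /* Hypothetical platform hooks. */
    extern void hw_watchdog_kick(void);
    extern void nvm_log_watchdog_event(uint32_t missing_tasks);

    /* Each monitored task calls this once per cycle. */
    void task_checkin(uint32_t task_id)
    {
        alive_bits |= (1u << task_id);
    }

    /* Run by a supervisor at the watchdog period. The hardware watchdog
     * is only kicked when every task has checked in; otherwise we record
     * which task went missing before the reset, so the event can be
     * reported instead of silently disappearing. */
    void watchdog_supervisor(void)
    {
        if (alive_bits == ALL_ALIVE) {
            alive_bits = 0u;
            hw_watchdog_kick();
        } else {
            nvm_log_watchdog_event(ALL_ALIVE & ~alive_bits);
            /* deliberately not kicking: the hardware watchdog expires and
             * resets the ECU, but the evidence survives in NVM */
        }
    }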

Tesla being non-compliant is precisely my point: they get away with that because they don't have to open up their code to scrutiny. But I'll bet that if they did, their marketing department would not be able to spin stuff the way they do at present.

All of those would take into account the relevant context and I think that's where things go off the rails here: embedded software developers do not get to claim the high ground around what they consider safe or unsafe, given the code from that world that I've had my eyes on. If anything it is amazing that it works as well as it does; the only two industries that get a pass based on my experience so far are medical embedded and avionics. Everybody else has the exact same problems as any other software project and would benefit from opening themselves up to scrutiny.


> Tesla being non-compliant is precisely my point: they get away with that because they don't have to open up their code to scrutiny.

This is an inaccurate assumption. ASIL compliance is something that you can publicly state, and Tesla explicitly does not. Most automotive products that do follow such standards generally state such. (Example: https://www.aubass.com/products/cp_top.html - search for ASIL D)

Making something open source does not in any way make it safe, unless your enforcement plan is having lawyers look into it and in which case you end up with lawsuits and likely endless litigation that results in a closed platform again. Tesla's calculation is that no one will enforce safety controls or standards compliance on them, and to be fair to date they (Tesla) are right.

> Personally I would consider any watchdog error that was not directly related to a hardware fault (say a bitflip) as a failure and I'd like to have my car report such failures.

This is an opinion and does not properly account for the multitude of scenarios that must be dealt with in automotive. Automotive, unlike aerospace, rarely has failover hardware or in some cases even A/B partitions on the disk for failover in software. Add in the need to process signals in real time and you will encounter situations where use of a health check (watchdog timer) is an appropriate response, and the use of one should not need to be reported.

> embedded software developers do not get to claim the high ground around what they consider safe or unsafe given the code from that world that I've had my eyes on.

They (embedded software developers) don't make such claims; when something is safety certified, an external third party does the validation and verification and asserts that an implementation and processes are either safe or not. In the EU this is usually done by TUV (https://www.tuv.com/world/en/) or Horiba-Mira (https://www.horiba-mira.com/).

This gets extremely complex as this is often hard tied to the hardware (support for a safety island, how memory is managed on the SOC, etc) and the overall E/E architecture (selected messaging protocol for the backbone) and layout of the vehicle. Analyzing a system of systems to determine all the possible impacts and make sure that the chance of failure is small enough to be acceptable is a hard problem to solve and not one any single engineer does.


>> Tesla being non-compliant is precisely my point: they get away with that because they don't have to open up their code to scrutiny.

> This is an inaccurate assumption. ASIL compliance is something that you can publicly state, and Tesla explicitly does not.

Sorry, but voluntary standards aren't standards. Tesla is non-compliant with ASIL; if they were compliant they'd definitely state that they are, so you may as well assume that they're not. Personally I think any party that doesn't bother to state they are compliant should simply not be allowed to ship a vehicle, because consumers are not going to be aware of the differences.

> Making something open source does not in any way make it safe, unless your enforcement plan is having lawyers look into it and in which case you end up with lawsuits and likely endless litigation that results in a closed platform again.

It does not guarantee safety. But it does more or less rest on the assumption that over the years at least some safety related bugs would be found, and if there is anything we've learned from open source over the last couple of decades it is that if you look long and hard enough at even the most battle tested codebases you will uncover an everlasting stream of bugs, with the frequency reducing over time.

>> Personally I would consider any watchdog error that was not directly related to a hardware fault (say a bitflip) as a failure and I'd like to have my car report such failures.

> This is an opinion and does not properly account for the multitude of scenarios that must be dealt with in automotive.

Yes, it is my opinion as a software developer of many decades that if you rely on your watchdog timer to keep stuff running outside of exceptional cases, you are doing it wrong. Imagine driving on the highway with one of your mirrors wedged against the guardrail for a close analogy of how I see this kind of 'engineering' practice.

A watchdog timer is the equivalent of ctrl-alt-del in case something stops working, and while it is better than nothing and should definitely be present, because it is still preferable to a system that is no longer responding at all (which is certainly going to be a safety issue), it should not be relied on for normal operation.

> Automotive, unlike aerospace, rarely has failover hardware or in some cases even A/B partitions on the disk for failover in software.

That's a cost decision, and with the cost of computation these days it is also absolute nonsense. A case could be made for this in the 80's but with hardware costing pennies this is simply no longer a valid excuse.

> Add in the need to process signals in real time and you will encounter situations where use of a health check (watchdog timer) is an appropriate response, and the use of one should not need to be reported.

I've been writing real time applications for a very long time and I highly doubt that such situations occur regularly, but I'm open to having my mind changed: can you please explain exactly what kind of situation you have in mind where you think a watchdog timer expiring is an appropriate response?

For me a watchdog timer spells: the situation is such that we can no longer reliably function and the safer option is to start all over again from a known set of defaults. It says that something unexpected has occurred that causes an operation that should have completed not to complete, and that this is outside of the design parameters that the software was originally specified with, indicating that most likely the controller itself is at fault (and not the peripherals that it is attached to).

>> embedded software developers do not get to claim the high ground around what they consider safe or unsafe given the code from that world that I've had my eyes on.

> They (embedded software developers) don't make such claims; when something is safety certified, an external third party does the validation and verification and asserts that an implementation and processes are either safe or not.

Yes. And that process is anything but perfect. I've seen plenty of code that had passed certification that was so buggy it wasn't even funny. Including automotive. In one extreme case someone thought it perfectly ok to do an OTA update on a vehicle in motion. I kid you not.

So let's not pretend certification is bulletproof; even if it is useful, it can miss glaringly obvious errors (time pressure, checkbox mentality).

> In the EU this is usually done by TUV (https://www.tuv.com/world/en/) or Horiba-Mira (https://www.horiba-mira.com/).

My experience is limited to the former. Let me recap that: I think their intentions are good but the bulk of the testing is limited to black box rather than in depth review and formal guarantees around performance. This has some interesting effects: it concentrates on the external manifestations of whatever makes the box tick and as long as the test parameters are exhaustive this will work very well. But for any device complex enough that the test parameters are only going to cover a fraction of the total parameter space you may end up with false confidence.

> This gets extremely complex as this is often hard tied to the hardware (support for a safety island, how memory is managed on the SOC, etc) and the overall E/E architecture (selected messaging protocol for the backbone) and layout of the vehicle.

Yes, again, I'm familiar with this and have some (but not complete) insight in how TUV operates when it comes to vehicle and component certification.

> Analyzing a system of systems to determine all the possible impacts and make sure that the chance of failure is small enough to be acceptable is a hard problem to solve and not one any single engineer does.

I think this is fundamentally borked. It will always be time and budget limited. Case in point: I recently reviewed some vehicle related stuff that had already been TUV certified that contained a glaring error in a complex control system, just looking at it from the outside gave me a fair idea of what I had to do to trip it up and sure enough it failed. TUV should have caught that (and the manufacturer too) if they were as safety conscious as they claim to be. I'm not saying that I'm outperforming TUV on a regular basis, I'm just saying that opening up this kind of code to more eyes, especially those that are more creative when it comes to breaking stuff, can - in my opinion - only be beneficial.

Edit: some more thinking about this: I think one of the reasons why I'm quite skeptical about for instance TUV is that in most countries that have large car manufacturers those manufacturers are 'too big to fail', and I would not be surprised at all if TUV (like BaFin) is not in a position strong enough to fail, let's say, a product line of a major manufacturer even if they find a massive error. It would immediately become a political football, and in practice this gives manufacturers a lot of benefit of the doubt with respect to self regulation, besides the fact that such oversight entities are usually understaffed. TUV may well have the best of intentions, but the fact is that VW managed to bamboozle them in a way that any serious code audit, including reproducible builds and something to verify that that was indeed what was shipped to customers, should have caught.

But I don't see a massive undertaking to put all VW (and other manufacturers') code through the wringer beyond what was already uncovered, simply because the only effect uncovering such a scandal would have would be to discredit the German car industry even further. So I don't think anybody is looking too hard.


So, this will be my last response to this thread as I think it's run its course.

> voluntary standards aren't standards

Most of the world's standards work this way. They are standards, and it is up to various legislative bodies to decide how to enforce these things. In automotive, compliance with a standard is generally attested to a government and included in the package that is shared with other governments to allow import or sale of the car in their country. Tesla simply flouts that.

> safety related bugs

This kind of thing isn't a thing if you understand automotive safety, or shouldn't be. You should have sufficient safety controls such that an unsafe condition will not occur. If this is a thing, you're talking about a bug then in the applied safety mechanism that allows an escape.

> watchdog timer expiring is an appropriate response?

Keys for SecOC get out of sync and throw an error. Not a safety problem per se, but as part of your health check (I consider watchdog timers an implementation of health and state management) you'd trigger a restart of the software to resync the keys.

> pretend certification is bullet proof even if it is useful it can miss glaringly obvious errors

I don't, but when it works it is sufficient. Open sourcing something adds nothing when it works. Importantly, TUV usually assumes liability for the things they certify (not in all cases, but generally that is how it works).

> limited to black box rather than in depth review and formal guarantees

We get the latter at my place from them, so I would poke at this area more if you think its black box only. This likely depends on the contractual terms, and who assumes liability.

> VW managed to bamboozle them in a way

The VW code is likely not safety relevant, so it wasn't reviewed as in depth. Most ECU code also isn't reproducible even today.

> So I don't think anybody is looking too hard.

On this I generally agree as someone in this space. The amount of money invested in Pwn2Own is small given the barrier for entry: https://www.zerodayinitiative.com/blog/2023/8/28/revealing-t...


> Keys for SecOC get out of sync and throw an error. Not a safety problem per say, but your health check (since I consider watch dog timers an implementation of health and state management), you'd trigger a restart of the software to resync the keys.

Ok, agreed in that case, though I'd prefer to see a forced reset rather than to rely on the watchdog timer as the mechanism to do it for you. You could just jump to the reset vector instead.
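
Something along these lines is what I have in mind (a sketch with hypothetical hooks; on a Cortex-M part the deliberate reset would typically be CMSIS's NVIC_SystemReset(), on other MCUs a write to the reset-control register):

    #include <stdint.h>

    /* Hypothetical hooks. */
    extern void nvm_log_reset_reason(uint32_t reason);
    extern void system_reset(void);   /* does not return */

    #define RESET_REASON_SECOC_DESYNC  0x5Cu   /* made-up code */

    void on_secoc_key_desync(void)
    {
        /* Same outcome as letting the watchdog expire, but the intent and
         * the reason are explicit and recoverable after the restart. */
        nvm_log_reset_reason(RESET_REASON_SECOC_DESYNC);
        system_reset();
    }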


I got a friend who was almost killed by traction control. Pulling out of his driveway onto a 3 lane road, with 2 cars (1 overtaking) coming around the corner very fast. He tried to punch it to get out of the way but: "computer says no". Longest 2 seconds of his life, he reckons, waiting for the gas pedal to work again. Both cars had to brake very heavily.

"Safety"


Yes. That sort of thing. I had a 2015 model C-class Mercedes make two fairly serious attempts at killing me before I got rid of it. The first time it could have been a glitch, the second was so obviously borked that I still wonder why they released that stuff on the road. And no way to disable that 'feature'. I'd have loved to analyze what exactly went wrong there and why a bridge and an advertisement by the side of the road were classified as imminent frontal collision.

My present car just does what I tell it to, no gizmos. It may be statistically less safe (I'm willing to believe that) but at least it doesn't actively try to kill me in the name of keeping me safe.


You don’t drive your lathe down the freeway at 80mph. And your lathe probably isn’t surrounded by hundreds of other lathes also going 80mph. And probably not being run by people only half paying attention.

Tell me about the liability laws in place related to you operating your lathe, or the state-required licensing and insurance that each lathe operator holds.

A machining tool is worlds away from a motor vehicle.


Control software for a lathe (or, in fact any industrial robot) is far more dangerous than a car. Go visit any metal workshop and take an inventory of the number of people and then count the number of digits. The fact that they don't cause nearly as many accidents is because they are not normally under manual control. But when they are better find some (preferably solid) cover.

And meanwhile, Tesla gets to beta-test their self driving stuff in public and I can't look at the source code to my ECU to figure out whether or not a certain behavior is by design or an indication that something is broken.


Cars will do a lot worse than reduce your finger count. In the UK, 68 people died in work-related accidents. Nearly 1700 died on the roads. (The same figures in the US are 5k/40k, go figure).

And I agree that Tesla shouldn't be beta testing on the roads. But neither should anyone else.


The main reason for that is that there are a lot more drivers than there are lathe operators, and that lathe operators usually injure themselves.

There is a relatively low chance that you'll find a lathe on a busy intersection or on a freeway during rush hour. I thought that was obvious.


Not necessarily irresponsible madmen; just curious.

Because I bet you if I buy a new car and discover that I can access its internal components via an API, I will be toying with it.

On any other platform that would never be a problem: found a bug? Just restart the container!

But with a car, this might mean a bug in my code manifesting itself while I'm driving 120 kph. And maybe there's a pedestrian crossing the road and I can't stop in time because the bug makes the brake 60% weaker.

This time however, there's not a restart docker button.

I'm sure if this happens people would be attacking Ferrari viciously, the way they pile up on Tesla whenever a douche sleeps at the wheel going 100 kph, even though the company said before that that's not safe.


Then we should all go back to security-by-obscurity and trust in the man behind the curtain for computer security as well. And we all know that that doesn't work, so why is there this conviction that the embedded programmers at car companies are made from magic?

It's precisely because cars are so dangerous that the code should be open to scrutiny. And of course - at least in the past - the argument has been made that more eyes do not make the bugs more shallow, but in practice if there is an incentive (such as personal safety) people will expend a lot of effort to figure out why stuff goes wrong.

What it would do is take away any kind of excuse manufacturers have, in those cases where their gear is suspect, to claim that their wares are perfect and that it must have been user error. Because I can pretty much guarantee you that if you were to inspect your average automotive code-base you'd find errors, and not just minor ones. From accidental erroneous emergency braking and unintended acceleration to outright malicious ones such as planned obsolescence drivers, emissions controls defeat code and so on.


Open to scrutiny, absolutely. Anything safety-critical should be freely available to those it can harm. Cars, trains, planes, nuclear reactors, lathes, the lot. I hope your code and schematics are fully provided to the workers relying on them being correct. I indeed don't have faith in regulators auditing it properly.

That said I still don't want someone to plonk some GitHub code into the brake controllers, take it for a spin and turn me and mine into meat salsa.

On private land, surrounded by informed and consenting people, sure, go nuts.


> That said I still don't want someone to plonk some GitHub code into the brake controllers, take it for a spin and turn me and mine into meat salsa.

The chances of that happening, versus brake fluid contamination, bad lines, seized rotors, rusted rotors, rotors and pads with grease on them and a thousand other mechanical failures are nil. Because brake controllers are always backed up by a mechanical system and the worst thing about a brake is that it could fail.

The bigger problem is that manufacturers that could barely create functional entertainment systems are now actually creating software and hardware combos that can override driver input to the steering wheel and the brakes and in my own experience they are absolutely not qualified to do this. Car software is crap, you can take my word on that. Very, very few manufacturers have software as a core competency.


Please define car software.

You have user facing functions, and you have engine control functions, ABS, transmission, and so on.

The first one, I agree, is generally crap.

For the second one, in a lot of models, your manufacturer hasn't even written the code, because they buy it from some OEM manufacturer like Bosch.

And I am pretty sure that Bosch is pretty good at writing this kind of software.


> Please define car software.

The totality of all code running on a particular vehicle that was part of that vehicle when it was sold to the end user.

> You have user facing functions, and you have engine control functions, ABS, transmission, and so on.

Yes.

> The first one, I agree, is generally crap.

Ok.

> For the second one, in a lot of models, your manufacturer haven't even written the code, because they buy it from some OEM manufacturer like Bosch.

That depends. If they buy a whole unit there is a chance it is 'stock', there is a chance that the firmware was modified by the manufacturer or there is a chance that development of the software is insourced. All of that depends on volume, cost, licensing, purpose.

> And I am pretty sure that Bosch is pretty good at writing this kind of software.

Based on what evidence?


The fact that I've been driving cars with ECUs since I was a teenager and never got stranded on the road because of a firmware bug, and neither has anybody else I know.

Compared with Amazon/Google/MSFT, this is a remarkable feat.


That's fair, embedded developers are a notch above the CRUD folks. Mostly because the hardware people tend to keep them sharp and won't accept any finger pointing unless there is a solid reason for it. But don't overestimate it either, without the watchdog timers embedded usually won't live long.

In general what keeps you safe is assumptions about failure modes that work out as long as everything stays within the set of parameters that define the working envelope of the device. But any combination of inputs that was unforeseen (and therefore not tested) is a possible source of surprises and if and when that happens embedded stuff has very little resilience.

The one piece of code that I wrote myself that had to pass certification (non-flying software that had the potential to crash aircraft and potentially kill people) cost me 10x the time it would have taken in a non-regulated industry. And I'm the first to admit that this was the most humbling experience ever in terms of the number of issues found, based on a very simple trick: dual development of the same software by a different set of programmers at some point in the past. Mine was the 'upgrade'. That old software was battle tested, but on a platform that was EOL. Before I got mine to the same level of reliability the changelog was many, many pages long, compared to my first attempt that I thought would pass the tests. Not so. Not by a long shot.

As for ECUs, they are relatively simple devices in terms of inputs and outputs, and they are required to be utterly reliable, because when you press the accelerator to cross an intersection those ponies had better be there. But that's mostly a result of many years of work on the software cores, and just like any other old piece of code, by now we've found most of the bugs.

The older - pre software - engine control units were massive chunks of hardware, look on the right hand side inside the passenger seat of 80's and late 70's era Mercedes and other German vehicles with fuel injection (and some Volvo and Citroen cars) for the kind of control unit that went into these cars. Just the component count and the number of adjustable parameters alone is enough to make you wonder how reliable that stuff is. And yet, work it did. But the modern ones are far more reliable, not because of software, but in spite of software. And even though they generally work the question I would like to see answered is how much of that is because of duct-tape and how much of it is because of solid engineering? Some people in the embedded world are of the opinion that it doesn't matter. But I think it does and I care enough that I'd like to see what makes this stuff tick. On the off-chance that there is a real bug or some unintended acceleration condition or a way to get the hardware to lock up lurking there that we simply haven't uncovered yet.

Given how prone car manufacturers are to stonewalling on this stuff my guess is: plenty of chance that that's the case.


Yes, now I can see your point. Haven't considered this in depth before. Maybe because we have such low expectations of software, when something mostly works without visible manifestations of failure we tend to consider it better quality than it actually is.

But, yes, the absence of failures should not be taken as proof of absence of bugs.


I read a lot of closed source code and quite a bit of it embedded and some of the stuff you find is enough to make you question just about anything.

Here is a nice picture of a pre-digital ECU:

https://www.benzworld.org/attachments/75-djetecu_3-jpg.16183...

It's essentially a hardwired analog computer (note the lack of adjustment points other than 'idle RPM'), as long as the external contacts are clean, the components are good and within tolerance the unit should work. There is nothing to adjust.

Modern ECUs have many more sensors to deal with, so higher data rates and many, many more fault conditions that have to do with emissions and preventing engine damage; these old D-Jetronic units were very impressive for their time, and plenty of them are still running today, which given the parts count is a small miracle.

In a way they are so reliable because of what they can't do, as long as the input parameters are roughly where they should be and the crank position input is accurate the engine will run. But whether it will run efficiently or not is a different matter and accurately setting these up without the right gear and manuals is next to impossible. So most parties that repair these will do a 'so-so' job, good enough to get the car back on the road but far from optimal. But the old battle-axes that you'll find these units in never were models of fuel efficiency. In contrast to this a modern ECU has exactly zero analog processing beyond digitization, it's all software. And where at all possible that software is not engine (beyond type and pre-set config) or drivetrain specific (though they do tend to talk to the transmission to get more information about the driving conditions) because that would require a lot of extra work per engine.

The most that you'll find is that the ECU has some knowledge of the serial numbers of key components (for instance the drivetrain) to give some anti-theft protection. Other than that it's all auto-configuration until some parameter goes out of spec and then that expensive check engine light comes on.
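
For what it's worth, the 'out of spec' part is conceptually just a plausibility window per signal, something like this sketch (hypothetical names and values; real ECUs run this through a standardized DTC/OBD stack with debounce counters and drive-cycle logic):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical diagnostic hooks. */
    extern uint16_t read_coolant_temp_raw(void);
    extern void     dtc_set(uint16_t code);
    extern void     mil_on(void);   /* the "expensive light" */

    #define COOLANT_RAW_MIN          50u   /* below: open circuit, implausible */
    #define COOLANT_RAW_MAX         950u   /* above: short to supply, implausible */
    #define DTC_COOLANT_IMPLAUSIBLE 0x0117u   /* made-up code for the sketch */

    void coolant_sensor_plausibility(void)
    {
        uint16_t raw = read_coolant_temp_raw();
        bool implausible = (raw < COOLANT_RAW_MIN) || (raw > COOLANT_RAW_MAX);

        if (implausible) {
            dtc_set(DTC_COOLANT_IMPLAUSIBLE);
            mil_on();
            /* the control strategy then falls back to a modelled default */
        }
    }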


Why do we even need computers in cars? The Woz said never to trust a computer you can't throw out a window. The only computer in my car is the radio.

It's just excessive consumerism and marketing crap. It's not needed.


The initial reason was emissions, but now that we have them all kinds of other stuff gets tacked on.


> I believe what I just wrote applies similarly for security too.

Automotive security is nearly an oxymoron. The reasons for that are simple: the difficulty and expense of attacking a vehicle exceeds the bored grad student/curious tinkerer threshold, and the automotive industry has collectively the worst attitude towards security I've ever encountered.

The depressingly predictable result is that third party automotive security testing is a sport reserved for people who are extremely uninterested in disclosing their methods to you, aka the actual attackers.


Why would it (legally) be on the car manufacturer if someone hacks his own car and causes an accident because of modifications to the ECU (firmware)?

This doesn't intuitively make sense to me. At the very least there are probably huge differences between countries when it comes to this?

Aside from the fact that some people would likely love to modify their car in every way possible to use it on the racetrack or whatever private property?


Or maybe to make it safer.


I think their point / the general FOSS argument is that those 1,000 man-years would be turned into 10,000 man-years if these things were open sourced. A similar security concern could be waved at things like openssl, but it seems pretty inarguable that openssl is a net positive for security.


I'm all for open access to the code for the sake of safety. On the other hand, I'm completely against hobbyists accidently bypassing a safety mechanism.

Open access, but secure access to software download could make sense, at least for commodity parts.

When it comes to features with competitive advantage, though, I don't see that OEMs or its suppliers have anything to gain.


> On the other hand, I'm completely against hobbyists accidently bypassing a safety mechanism.

Accidentally.

Besides that: it should be fairly obvious that hobbyists are not going to 'accidentally bypass a safety mechanism', they can cut their brake lines as well and they don't generally do this. What you'd see is that the aftermarket would finally be able to produce stuff without dealers in between and people with the 'right' kind of tooling (authorized by the manufacturer) to get your replacement to be recognized by the firmware. Because of course absolutely none of this would ever be used to protect the bottom line. Right?

Also: if anything open sourcing this stuff would likely result in more rather than less safe vehicles, maybe at the expense of a couple of embarrassments. Because I have absolutely no illusion about the people working on these systems professionally to be somehow magically better than the ones that work on them for themselves, after all, they have a pretty big stake in the outcome.

Imagine that, working on your car in a safety related way... replacing brakes, steering housing components, linkages, suspension components, tires and so on is all at least as risky as working on software, if not more so.

FWIW one of those 'safety features' tried to kill me twice and caused me to let go of my recent car and switch to a 1997-issue vehicle that has behaved quite predictably compared to that modern one, whose 'safety features' could not be disabled.


> Besides that: it should be fairly obvious that hobbyists are not going to 'accidentally bypass a safety mechanism', they can cut their brake lines as well and they don't generally do this.

I can already picture the YouTube videos on "how to gain 15% hp" explaining you how to "hack" your car with a 1s "it will severely reduce your engine life expectancy" message at the end. Thousands of people would run this patch without thinking twice

Also, how would you pass the inspections most countries require every other year on cars? I don't expect the people checking my brake pads to know how to review the random piece of code I deployed to my car.


Well, that's sort of the point: this is already possible, so in that sense nothing would change. Changing the mapping (essentially the amount of fuel injected based on a bunch of parameters) is regularly done by 'tuners' (between quotes because they don't really tune anything, they mostly burn more fuel for questionable gains).
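
For the curious: the map in question is conceptually just a lookup table indexed by engine state, roughly like this (a toy sketch with made-up axes and values; real maps have more dimensions plus corrections for temperature, lambda feedback, knock and so on). What 'tuners' sell is essentially new numbers for those cells.

    #include <stdint.h>

    /* Toy fuel map: injected fuel (arbitrary units) indexed by RPM and
     * manifold pressure breakpoints. */
    #define RPM_POINTS  4
    #define MAP_POINTS  4

    static const uint16_t rpm_axis[RPM_POINTS] = { 1000, 2500, 4000, 6000 };
    static const uint16_t map_axis[MAP_POINTS] = { 30, 60, 80, 100 };  /* kPa */

    static const uint16_t fuel_table[RPM_POINTS][MAP_POINTS] = {
        {  80, 120, 160, 200 },
        { 100, 150, 200, 260 },
        { 120, 180, 240, 320 },
        { 140, 210, 280, 380 },
    };

    /* Nearest-lower breakpoint; real code interpolates between cells. */
    static uint8_t axis_index(const uint16_t *axis, uint8_t n, uint16_t value)
    {
        uint8_t i;
        for (i = 0; i + 1 < n; i++) {
            if (value < axis[i + 1]) break;
        }
        return i;
    }

    uint16_t fuel_lookup(uint16_t rpm, uint16_t map_kpa)
    {
        uint8_t r = axis_index(rpm_axis, RPM_POINTS, rpm);
        uint8_t m = axis_index(map_axis, MAP_POINTS, map_kpa);
        return fuel_table[r][m];
    }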

But that's really not what I would care about. I'd like to read that code to figure out what the failure modes are and what might impact my safety in a negative way.


I can go out to my late-80s truck right now, undo the lock nut on the fuel screw and wind that sucker up. It'll be a completely different truck. Will I halve my fuel range? Sure. Will it be fun? Sure will be. If I care about the engine I'll attach an EGT gauge to ensure I don't melt the alloy head.


Hackable does not mean crackable. The best security implementations in the world are free software.

I'm not even a tiny bit convinced that making cars hackable would be a detriment to safety. Give me one example of that happening in literally any other sector.


You might have a point there, but I struggle to find any completely hackable product that is also safety-critical. Some airplane, nuclear reactor or some train, perhaps?


Any old car will do.


And new experimental aircraft, which are owner-built.


Okay, so for the moment leave aside the safety critical bits (only for a moment) - what's the excuse for not opening up the center console? That generally is already segregated and only handles non critical functions.


Center consoles have been used quite successfully as beachheads by hackers to be able to get into more important systems because car manufacturers are typically utterly clueless when it comes to security. So obscurity is a very large part of their security. Of course that doesn't really work with the most motivated parties (car thieves and their captive techies) having a field day with this.

Hyundai and Kia are reportedly so bad that they ended up paying out a large amount of money to compensate owners.

https://www.reuters.com/legal/hyundai-kia-agree-200-million-...

But don't worry, it's been fixed now. Probably.


If it's security critical, it definitely needs to be FOSS and user patchable; obscurity is not a reasonable strategy.


Agreed, but that won't happen until some regulator wises up to this being a way to reduce vehicle theft considerably.


> Even the window regulators have non-trivial implementation concerns for anti-pinch

Tesla just got hit by this a few months back. They had to remove the auto roll-up-windows when you walk away after parking. Apparently they didn't have the sensors or hardware to do it safely.


> Allowing a random hacker to override this is a terrible idea.

It should be a basic right no matter how "terrible" an idea it is. We bought it, we should have full control. Void the warranty or something.


Many things "should be" and "ought to be", but we all sat around a table and decided to make a neat little thing called the law, because at the end of the day we're still just apes, and apes don't always act in the best interest of their peers.


Do you seriously actually believe the government and The Law has your best interests at heart?


Well no, that's my entire point, it's not about what work best for _me_, it's about what somewhat works when scaled to a whole country or continent. It's flawed in many ways, and corrupted in other ways, but it's better than the alternative

If I really wanted to do what I want whenever I want I'd live in a hut somewhere in the woods. Law is rarely about _individual you_ it's about the society you live in.

Why do I care if you have your seatbelt or your motorcycle helmet? Because if you get injured you'll cost millions to the community. Same exact principle here: it's part of the social contract, you enjoy a lot of neat things and in return you give up some other things.


I'm thinking through the ethics of this myself, but I think it's a reasonable argument that you can have the freedom to do what you want with things that you own _assuming you don't impact others_. The issue with a dangerous car is that it puts others at danger.


Responsibility. If I modify my car and that's determined to be the causal factor of an accident, it's on me.


That'll make the hypothetical parents of the hypothetical kid you just ran over very happy I'm sure. "I fucked up but it's on me!" and they lived happily ever after


Yes? Society is actually completely fine with that arrangement. Every single year tons of people get drunk, get into accidents, get people actually killed and are held accountable for it. Yet nobody dares to infringe on their freedom to drink. They tried once and it just didn't work out.

This "give up freedom because kids" nonsense is seriously tiresome. I'm not engaging with these arguments anymore.


Somebody please think of the children.

Really, come on: you can do much better than this. The hypothetical kid might be the one that was saved because some random hacker figured out why the latest generation radar based cars keep phantom braking. That argument works both ways, and typically what is fixed stays fixed.

And of course none of these hotshot programmers would ever be seen near the following bit of code[1], which probably impacted a lot of actual children and their health in a very direct way:

     // in case of an emissions test ensure that we pass
     // with flying colors

     if (emissions_test_detected()) {
         // we're sooo environmentally conscious
         lean_burn();
     }
     else {
         // roll that coal
         regular_burn();
     }
[1] cribbed from VW/Porsche AG's internal repository ;)


Agreed, and that's an argument for open source, not against.


And someone could respond "Okay, fine, tampering with the onboard software voids the warranty and shifts responsibility to the tinkerer." But that's a liability issue. The safety concern is still there regardless of who is held responsible. A change that seems innocuous may, in fact, be breaking safety regulations. This is a big deal and a matter of public concern.


If you clutch those pearls any harder they'll turn to dust.


This is akin to parking anxiety, where you never know how long it will take to find parking in the city. Still, we have cars in the city.

I've had an EV for over five years now and I have so far waited on a charger once, for twenty minutes. And that was at peak hours of the spring break mountain rush (when every Swede packs their car and heads from the populated south to the mountainous north-west).

On the other hand, I have spent accumulated hours on driving from broken charger to charger that is only for customers to one that I'm not sure if it's in operation yet.

Also hours on just installing different charging apps with the usual registration flow (email, password, verify email, register card, etc).

More chargers and standardized experience (why not just chip and pin?) would be highly appreciated.


Here in Ireland, in the motorway filling stations where the chargers are located, it is very common for me to not only have to wait, but often for 2-3 cars to charge ahead of me. Very limited infrastructure here, though.


PS1 was a low volume car, almost concept level. The UX of PS2, a large volume car, is much better and generally very well received.


Also the PS1 isn't an EV, it's a hybrid.


I must be doing something terribly wrong. I've had my bike for 7 years now, use it every second day, in all kinds of weather, and I lube it twice a year at best. It ought to be falling apart!

What's your basis for your claim of such frequent maintenance? Also, what do you mean by a modern bike?


They will still function but a 12spd Ultegra / Force road bike will eat the chain and cassette and lose shifting precision if they're not maintained regularly. And high end road bike cassettes and chains are expensive to replace.

But when I had a basic hybrid bike I similarly just cleaned it every few months at most when it got dirty from wet rides. Worked fine for years.


Automotive embedded C developer here. Most of the code in this industry is implemented in a subset of ANSI C90. The reason is not header files, libraries or linker scripts. The reason is as the grandparent post points out: compiler availability.

For rare targets, there's just no money in making a compiler work for more than this small subset of C. My favorite example is the compiler for a really strange architecture where everything is 24 bits. Char is short is long is a pointer.

Bigger controllers tend to be ARM-based, so it's getting better.

Also TriCore has a big share, but its compiler support is getting better. There was even a Rust compiler announced recently, though I doubt it's using an LLVM backend, so it's likely to be behind in terms of features.


> Char is short is long is a pointer.

I thought char was required by the C standard to be 8 bits. If everything is 24 bits, how do you cover 24 bit address space with 8 bit pointers?


> I thought char was required by the C standard to be 8 bits

It's not. (EDIT: it is required to be at least 8 bits though)


Welcome to C! The guns are free, but you can only point them at your foot.

The C standard specifies so much less than everyone thinks.


It specifies so much less, precisely to allow it to be usable on such weird architectures.


I wouldn't really qualify this as a foot gun, but rather appropriate flexibility. 98% of people never encounter it, and the 2% that do are happy that it is that way, and I don't think alternative approaches like having the compiler hide it are appropriate for C.


I thought char was required by the C standard to be 8 bits

Common misconception.

to be at least 8 bits though

At least 8 bits for the latter-day compilers for which the 'standard' was the guide. I have seen C compilers where char was 6-bit or 7-bits matching the underlying hardware. However, those probably never implemented more than K&R or some pre-standardization perversion of it.

Also...as noted a char can be more than 8 bits, pre- and post-standardization. I've heard tell of (at least) 9-, 16-, 24- and 32-bit chars on more or less weird platforms.

At some point I'm sure the standards bodies will do some variation of deprecating 8-ish-bit chars in favor of 32-bit Unicode as the standard representation for text.
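
If you're curious what a given toolchain actually provides, a trivial check (host-compilable sketch; the only guarantee the standard gives is "at least 8"):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT is only guaranteed to be >= 8; on a 24-bit DSP it can
         * legitimately be 24, with sizeof(char) == sizeof(int) == 1. */
        printf("CHAR_BIT      = %d\n", CHAR_BIT);
        printf("sizeof(short) = %zu\n", sizeof(short));
        printf("sizeof(int)   = %zu\n", sizeof(int));
        printf("sizeof(long)  = %zu\n", sizeof(long));
        printf("sizeof(void*) = %zu\n", sizeof(void *));
        return 0;
    }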


My favorite example is the compiler for a really strange architecture where everything is 24 bits

DSP? That's the one example I can think of where 24-bit architectures are in active use (and many of them are not Von Neumann, but Harvard with separate data/program memory.)


> The reason is as the grandparent post points out: compiler availability.

is there no appetite to self-write a compiler for the architecture being used?


"Intelligent life on other planets? I'm not even sure there is on earth!"

Albert Einstein.

(According to this link, which I don't fully trust...

https://www.goodreads.com/quotes/587314-intelligent-life-on-...)


Hah, I fought a battle for months at a Nordic automotive OEM to try to stop them from overusing "fora", to no avail. They were so convinced that it was the proper word to use that they also used it in the singular, as in "We will bring that up in another fora", often referring to a single meeting to be held the next day. This is only one tale out of many of how corrupted corporate Scandinavian English can be, and sadly, having been essentially marinated in this parlance, I can probably only detect and reflect on a fraction of the linguistic felonies committed around me, and worse: by myself.


fora eunt domus


In fact, many Chinese sites use numbers instead. Easier to type, easier to memorize.

https://mediaoptions.com/blog/understanding-numeric-domain-v...


What you're asking for is a microkernel os, or at least a generic kernel shim to interface a user space driver.

Example of microkernel: QNX.

Example of userspace driver: fuse.

I'm not familiar with the architecture of OpenBSD, does it run its drivers in userspace?


>I'm not familiar with the architecture of OpenBSD, does it run it's drivers in userspace?

No, it's the BSD without modules.

>Example of microkernel: QNX.

Yes, and Minix, Hurd and TrueUnix64 are just some other examples.

