> where trace amounts of our patented seeds or traits are present in a farmer’s fields as a result of inadvertent means.
That sounds like a very hollow commitment to me. Who defines what "trace" is? Monsanto?
And what is the normal cross-pollination rate from doing nothing? 1%? 5%? It sounds like it just means: we won't sue you the first year, we'll wait until the second year and then sue you.
The practice needs to be banned. It's Monsanto seeds that are spreading their genetics in the wind. If they don't want that, then make crops that can't. If they're unable to, then tough.
Saying nobody within pollination range can grow their own crops anymore once someone nearby purchases Monsanto seeds is absurd.
That's all aside from the fact that patenting things that reproduce is a somewhat weak concept to begin with.
To put an absurd tech spin on it: if you made a robot/machine that could replicate itself, sure, patent it. But if you made a robot that sent out radio waves and every machine within receiving distance could/would suddenly replicate, you can't sue those owners for "stealing your technology".
The proof is in the pudding. To my knowledge Monsanto has never sued anyone over inadvertent cross contamination regardless of the percentages. The cases where they have sued were farmers who explicitly went out and got Roundup resistant seeds to use with Roundup from unlicensed vendors or in violation of a license they themselves signed with Monsanto.
It has never made any sense for them to enforce it against cross contamination because farmers don't want the seeds if they're not already nuking everything with glyphosate. They either buy F1 seeds every year for the extra yield hybrid vigor gives them or they save seed that's somewhat optimized for their growing conditions.
> Saying nobody within pollination range can grow their own crops anymore once someone nearby purchases Monsanto seeds is absurd.
This is a fantasy you have concocted, not the reality.
> I promise when I’m old, I will not have this attitude.
To my ears this is a hilariously naive statement. It sounds roughly equivalent to a 7-year-old saying "Adults have boring jobs where they sit at a desk all day. I hate it. I promise when I'm old I'm gonna be an astronaut or play Major League Baseball."
It's not that they don't mean it, it's that one shouldn't make promises about a situation they can't yet understand. While some of those kids probably did end up being astronauts or baseball players, 99%+ of those who made that promise didn't. It turns out that being an adult gives them perspective that helps them realize the reasons they want a desk job even if they don't like it, or for many, that they actually enjoy their desk job (e.g. they like to program).
So it's the same here: if a million young people all thought similarly, and then magically changed their view when they got there, don't promise you're going to be the one who breaks the streak.
You might turn out to be an astronaut, but many people listening, based on good prior evidence, will rightly assume you won't.
Read what you just wrote. You are just declaring a belief, not making an actual point.
Do you expect to learn? Get wiser?
If you do, you will eventually develop wisdom that younger people don't have yet, or may never get. Younger people find new ways to do many things better, but regress in other ways, lacking your (and your generation's common) experiences.
Which is why the only old people who can't see any real regression are … well, I have yet to meet that kind of old person, other than those unfortunate enough to have dementia.
Also, every new better (or perceived better) way to do things has to reinvent many obvious things all over again. Things many won’t realize were already solved by previous practices. Which takes time.
So meanwhile, regressions.
And there is no assurance that new ways will really be better, after all regressions are addressed. Because it is impossible to see all the implications of complex changes.
Anyone who isn't aware that the amount of today's glue code, the rewriting of common algorithms for slightly different contexts, the mishmash of different tools, platforms, and dependencies, and all their individual quirks, is a non-optimal outcome…
But the current pain points will drive a new way. And so it goes.
Progress is not smooth or monotonic.
It is a compliment, not a critique, to assume that you too will eventually notice this.
??? my point is someone who doesn’t know the first thing about me called me naive and made bold claims about my future that I’m certain are wrong; time will tell but there’s nothing of substance to discuss from their comment, hence my reply
you’ve also just said a ton of stuff I don’t disagree with, but I’m not sure what discussion you’re trying to have here
I do regret the time spent reading this article and participating in this comment section; that was naive of me!
> Anyone who learns anything looks back on a naive version of themselves. I remember thinking a lot like you, too.
I'm so glad that, for the most part, in my early internet days (early 2000s) I was pseudonymous. I tended to have very strong opinions about stuff I had barely just learned and didn't have the experience to grasp the nuances of. My political opinions have completely flipped, and I look back on my young firebrand days and unfortunately see lots of young people repeating the same vapid shit that I believed because I was ignorant but convinced it all followed from the simplistic crap ideas I was raised with.
"...it’s one of my least favorite types of people; and that’s precisely my point, old men have been saying society is collapsing since ancient times, yet here we are, with things better than ever"
Which is a pretty strong opinion. Also, pretty much all societies that ever existed have collapsed. When that happens, life generally sucks and lots of people die. I'm not just talking about ancient Rome or Greece, or Easter Island, or the fall of dozens of different empires, or more recently South Sudan or Haiti.
Other people in the thread called you naive. I won't insult you like that, but just given the statements here, there's a whole lot of familiar-sounding overconfidence that reminds me of things I'd have said in my 20s.
> If something falls out of the center of the normal distribution, it's by definition abnormal. Once again, that doesn't make it bad per se. But trying to police perfectly good words just makes people become more antagonizing to the position you want to defend.
I mean, why do people even post something like that? It takes 2 seconds to look up the definition of abnormal. Is it really not knowing, or is it (what I believe) trying to sneak in their moral judgements behind a veneer of supposed "neutrality"?
> Abnormal - deviating from what is normal or usual, typically in a way that is undesirable or worrying.
> "[...] is it (what I believe) trying to sneak in their moral judgements behind a veneer of supposed 'neutrality'?"
Yes, that's precisely what it is. Moral judgements based on outdated ("conservative", especially clerical) understandings of the world, wrapped in some delusional sense of "objectivity". Only the scientifically and philosophically illiterate fall for it. In German, we call it Bauernfängerei (swindling, duping; lit. "peasant catching").
> Giving more money would bring in better teachers, but I don’t know that they’d be able to overcome the other obstacles
Why not start with the easiest thing to control, giving more money, and see what it does?
We seem to believe in every other industry that to get the best talent you pay a high salary, but for some reason we expect teachers to do it out of compassion for the children while they struggle to pay bills. It's absurd.
Probably one of the most important responsibilities of a society is to prepare the next generation, and it pays enormous returns. But because we can't measure it with quarterly profits we just ignore it.
The rate of return on providing society with a good education is insane.
Just own goal, after own goal, after own goal by the current US political party in power.
While China proceeds to lap us doing what the party insisted was economically infeasible if not impossible: investing in cutting-edge technologies, gaining energy independence, and leading the world in manufacturing growth.
And instead of investing in the future, building better products and tech, and out-competing the world, all we've heard for the past decade+ is protectionism for oil interests and tariffs for industries.
Such backwards thinking, while China is an existence proof of what they claimed couldn't happen.
Such lost opportunity for no good reason at all. This country is going to hate being so weak on the international stage, but it's almost inevitable given its domestic choices.
CH4 + 2 O2 -> CO2 + 2 H2O (ΔH = -890 kJ/mol)
C + O2 -> CO2 (ΔH = -393 kJ/mol)
The relevance of those equations: CH4 is methane, the principal component of natural gas; C is pure carbon, the principal component of coal; and the USA has been transitioning its energy production from coal to natural gas.
The above equations say that if you produce the same amount of electricity with natural gas, your CO2 emissions roughly halve. That is the driver of the reduction you point to. It is nice to see, but halving is the best that can happen. Meanwhile, if China continues down its current path, their CO2 emissions for electricity production will drop to 0.
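To spell out the "roughly halve" part, here's a quick back-of-the-envelope check from the enthalpies above (real plants differ in thermal efficiency, so treat this as ballpark only):

    # Energy released per mole of CO2 emitted, from the equations above
    kj_per_co2_methane = 890.0  # kJ per mol CO2 (CH4 combustion)
    kj_per_co2_coal = 393.0     # kJ per mol CO2 (C combustion)

    # For the same energy output, CO2 emitted by natural gas relative to coal
    ratio = kj_per_co2_coal / kj_per_co2_methane
    print(f"natural gas emits {ratio:.0%} of coal's CO2 per unit energy")  # ~44%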
According to the EIA, the USA has about 18 years of reserves of natural gas at current production rates [0]. The USA has about 70 years of reserves of coal, so this transition to using natural gas is temporary. After about 100 years the USA will run out of both. If you want to see what that looks like, look at the UK.
China can get to 0 emissions with their approach - replacing coal with solar and batteries. The US cannot do the same with theirs - replacing coal with natural gas.
The United States outsourced the bulk of its consumer production to China and the latter's emissions are still falling. You don't find that impressive?
He is so blatantly not a conservative. So with a 41% (though falling) approval rating, the bigger question is: did those who claimed to believe in it really give up their principles so easily? Or did they never really believe in them to begin with?
Interesting, are extension methods a backdoor way to typeclasses in C#? If so, that'd be fascinating to see.
If the extension method implements an interface, and any existing type can be extended via that interface, it seems like you get that functionality, right? Or am I missing something?
The mock discussion still misses the real solution, which is to refactor the code so that you have a function that simply reads the file and returns JSON; that function is essentially a wrapper around open and doesn't need to be tested.
Then have your main function take in that JSON as a parameter (or a class wrapping that JSON).
Then your code becomes the ideal code: stateless, with no interaction with the outside world. Then it's trivial to test, just like any other function that simply translates inputs to outputs (i.e. pure).
Every time you see the need for a mock, your first thought should be "how can I take the 90% or 95% of this function that is pure and pull it out, and separate the impure portion (side effects and/or stateful) that now has almost no logic or complexity left in it and push it to the boundary of my codebase?"
Then the complex pure part you test the heck out of, and the stateful/side-effectful impure part becomes barely a wrapper over system APIs.
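A minimal Python sketch of that refactor (load_config and apply_discount are made-up names, just for illustration):

    import json

    # Impure shell: a thin wrapper around open() with no logic worth testing
    def load_config(path):
        with open(path) as f:
            return json.load(f)

    # Pure core: all the logic, trivially testable with plain dicts
    def apply_discount(config, order_total):
        return order_total * (1.0 - config.get("discount", 0.0))

    # Tests exercise the pure core directly; no mock for open() needed
    assert apply_discount({"discount": 0.25}, 100.0) == 75.0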
Funnily enough, I am preparing a simple presentation at work to speak about exactly that. The idea of separating "logic" from I/O and side effects is an old one and can be found in many architectures (like hexagonal architecture). There is plenty of benefit doing this, but testing is a big one.
It should be obvious, but this is not something that seems to be taught in school or in most workplaces, and when it is, it's often through the lens of functional programming, which most just treat as a curiosity and not a practical thing to use at work. So I started to teach this simple design principle to all my junior devs, because it is actually quite easy to implement, does not need a complete shift of architecture or a big refactor when working on existing code, and is actually practical and useful.
> Then the complex pure part you test the heck out of, and the stateful/side effectful impure part becomes barely a wrapper over system APIs.
In practice, the issues I see with this are that the "side effect" part is usually either extensive enough to still justify mocking when testing it, or intertwined enough with your logic that it's hard to pull all the "pure" logic out. I rarely see functions being 90-95% pure logic vs side effects.
E.g. for the first, you could have an action that requires several sequenced side effects, and then your "wrapper over APIs" still needs validation that it calls the right APIs in the right order with the right params, for various scenarios. Enter mocks or fakes. (And sometimes people will get clever and say use pubsub or events for this, but... you're usually just making the full-system-level testing harder, as well as introducing less determinism around your consistency.)
For the second, something like "do steps I and J. If the API you call in step J fails, unwind the change in I." Now you've got some logic back in there. And it's not uncommon for the branching to get more complex. Were you building everything in the system from first principles, you could try to architect something where I and J can be combined or consolidated in a way to work around this; when I and J are third party dependencies, that gets harder.
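Concretely, that I-and-J case is essentially a compensating action; a rough sketch, where do_i/do_j/undo_i are hypothetical client calls:

    class ApiError(Exception):
        pass

    def do_both(client):
        receipt = client.do_i()        # step I: first side effect
        try:
            client.do_j()              # step J: second side effect
        except ApiError:
            client.undo_i(receipt)     # J failed: unwind the change from I
            raise

That branching is exactly the logic you'd want tested, which is where mocks or fakes for the client come back in.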
You're right that it's not always easy to separate pure from effectful code. But you definitely can (model it as a state machine; a sketch follows below) and I think it's worth it, especially if those steps can take a long time, can fail, might be executed in parallel, etc.
For instance, I once worked on payment-related code at a large online retailer. The steps I and J from your example would have been calls to the payment gateway's API (payment initiation, actual payment request). There was also a step K (polling for payment confirmation) and even a step K' (a payment confirmation callback the gateway might or might not call before or after we got around to polling for the payment status ourselves). And often there was even user interaction in between (the 3DS/3DS2 credit card payment scheme that's common here in the EU). Every single one of those steps could fail for a myriad of reasons (e.g. time out, be rejected, …) and we had to make sure we always failed gracefully and, most importantly, didn't mess up our payment or order records.
Of course this was an old enterprise Java code base, created by people who had long left the company, and all this had been written just the way you imagine it. It was an absolute mess.
Every single time I worked on this code base I secretly wished one of the original authors had heard of state machines, pure vs. effectful code, and unit tests.
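For what it's worth, a bare-bones Python sketch of modelling those steps as a state machine (state and event names are invented for illustration; the point is that the transition function is pure and unit-testable, while the gateway calls live elsewhere):

    from enum import Enum, auto

    class Payment(Enum):
        INITIATED = auto()
        REQUESTED = auto()
        CONFIRMED = auto()
        FAILED = auto()

    # Pure transition function: (state, event) -> state. No I/O here.
    def next_state(state, event):
        table = {
            (Payment.INITIATED, "request_ok"): Payment.REQUESTED,
            (Payment.REQUESTED, "poll_confirmed"): Payment.CONFIRMED,
            (Payment.REQUESTED, "callback_confirmed"): Payment.CONFIRMED,
        }
        if event in ("timeout", "rejected"):
            return Payment.FAILED
        return table.get((state, event), Payment.FAILED)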
The risk of that approach is that you end up writing code that can't deal with the real-world problems of I/O, such as timeouts, failed reads, jitter, and other weird behaviour.
Separating I/O from logic makes a lot of sense and makes tests much easier to write and code much easier to reason about, but you'll still need to implement some sort of mocking interface if you want to catch I/O problems.
> how can I take the 90% or 95% of this function that is pure and pull it out, and separate the impure portion (side effects and/or stateful) that now has almost no logic or complexity left in it
They addressed this concern already. These are not contradicting approaches.
I think the disconnect is that the impure portion often has substantial amounts of inherent complexity. If you're working at a high level that's handled by a lower level (OS, framework, etc.) but somebody is writing that framework or OS or HAL. And they need to test too, and ultimately someone is going to have to make a test double of the hardware interface unless all testing is going to require hardware-in-the-loop.
I've had great success swapping in a in-memory database via dependency injection and just running 100% of the application, end to end.
In the ideal case my tests start by writing some randomised data using the external API, then update it (if applicable) using the external API, and finally read it, also using the external API, and compare the actual result with what I expected.
I use randomised data to avoid collisions with other tests, which might cause flakiness and/or prevent running the tests concurrently. I avoid having seed data in the database if at all possible.
It's the only approach I've found that can survive a major refactor of the codebase. Anything short of breaking the external API, which is typically a no-no anyway, shouldn't break these tests.
Doing a refactor and being able to rely on the test suite for finding bugs and inconsistencies is amazing. Of course they won't find 100% of all bugs, but this way at least you know that a failing test means there's a problem in your production code.
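Roughly what that looks like in Python, using sqlite's in-memory mode as the swapped-in database (create_user/get_users stand in for real application code that takes its DB handle as a parameter, i.e. the injection seam):

    import sqlite3, uuid

    # Application code receives its DB handle instead of creating it
    def create_user(db, name):
        db.execute("INSERT INTO users(name) VALUES (?)", (name,))

    def get_users(db):
        return [row[0] for row in db.execute("SELECT name FROM users")]

    # Test: swap in an in-memory DB and run the real code end to end
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users(name TEXT)")
    name = f"user-{uuid.uuid4()}"  # randomised data avoids collisions between tests
    create_user(db, name)
    assert name in get_users(db)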
Thou shalt never mock the database, for thou shalt anger the database when thou mocketh it.
In all seriousness, I have found this to be a useful suggestion, because the purpose of a test is to make sure invariants don't break in real code. When you mock the database, you're excluding large amounts of real code from test.
I think the important thing is that the code look pure from a testing perspective.
Say you've got a function that accesses a key-value store. Ideally, you can factor out the i/o so that you do all your reads up front and all your writes at the end, leaving a pure function in the middle. But if the code is too tangled up in its side effects for that, the next best thing is to create a fake KV store and then wrap the function like this:
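(A rough Python sketch; the FakeKV name and interface are illustrative.)

    # In-memory stand-in for the real key-value store
    class FakeKV:
        def __init__(self, data=None):
            self.data = dict(data or {})
        def get(self, key):
            return self.data.get(key)
        def put(self, key, value):
            self.data[key] = value

    # The wrapper makes the tangled function look pure: dict in -> dict out
    def run_with_fake(func, initial, *args):
        kv = FakeKV(initial)
        func(kv, *args)
        return kv.data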
Dependency injection frameworks can be a pain, but basic dependency injection (e.g. pass a DB handle to anything that needs the DB) is a must. What's the alternative? Having everyone independently create their own DB connections?
It's my preference to a point. I think you can over-do it. I've worked in systems that didn't use it and worked just fine. I would bet most folks err on the side of too little, unless they have some kind of framework doing the heavy lifting.
I agree with you, however convincing an entire team of devs to explicitly separate the interface of impure parts of code is very difficult.
If you introduce a mocking library to the test portion of the codebase, most developers will start to use it as a way to shortcut any refactoring they don't want to do. I think articles like this that try to explain how to better use mocks in tests are useful, although I wish they weren't necessary.
I always liked the phrase 'Hoist your I/O' [1], but yes, you can only hoist it up so many times until it's outside of your application completely (making it completely pure, and now someone else's responsibility).
You still need to write a test for how it all comes together and you should write tests for your error handling. You need a mock to respond with an error.
Tests help make sure that the intent is to just bubble up the error.
What if someone in the future comes in and modifies that to add some complexity somehow, or changes it to log and continue? Tests will catch that behavior.
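A small sketch of that with Python's unittest.mock (sync and fetch_data are hypothetical names):

    from unittest import mock

    def sync(client):
        try:
            return client.fetch_data()
        except ConnectionError as e:
            raise RuntimeError("sync failed") from e  # just bubbles up, no logging

    # Mock the error path and pin down the bubbling-up behaviour
    client = mock.Mock()
    client.fetch_data.side_effect = ConnectionError("boom")
    try:
        sync(client)
        assert False, "expected RuntimeError"
    except RuntimeError:
        pass  # if someone later changes this to log-and-continue, the test fails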
It's rarely possible to future-proof a function by testing its entire range of possible inputs, so I'm not particularly concerned about trying to anticipate trivial cases that might become non-trivial in future—that just feels like a special case of future-proofing.
I think it's more important to ensure that, after each new commit, every non-trivial code path in that function is exercised. If someone adds new logic, they're responsible for adding new tests to cover that logic; similarly, if they turn a trivial code path into a non-trivial one, they're responsible for adding tests to cover that too.