glenstein's comments | Hacker News

Agreed! If anything, I think I'm tired of the "everyone says this when they get old!" hot take. Sometimes things really do get visibly worse, and the intergenerational complaining exists because the decline is really happening.

I bring this example up every time, but I'm a baseball fan. And seemingly every generation of fans has said there are more people striking out than there used to be. Is that just part of getting old as a baseball fan? No! It's really happening. Strikeouts have quite literally been going up from one decade to the next for basically a century.

So sometimes it's because the thing is really happening. Environmental ecosystem collapse? Real. People having shorter attention spans with each new generation? Real! Politics getting worse? Well, maybe the 1860s were worse, but the downward trajectory over the last 50 years seems pretty real. Inefficiency of increasingly automagic programming paradigms? Real!

Sometimes things are true even if old people are saying them.


In my opinion, we're possessed by a cultural epidemic of think pieces doing rich and nuanced science history, but wrongly framed as corrections of "myths" that, in their substance, amount to quibbles over narrative emphasis. It's easy to get taken in by the framing because it truly is enlightening, and the argument goes down so smooth because it's embedded in a rich, curious, and fascinating scientific history that otherwise embodies best practices I would happily celebrate.

But the key details of the story of penicillin are that a moldy plate showed a bacteria-free clearing, Fleming saw it, isolated the mold, proved that its filtrate killed germs, and published the finding. That is the heart of the story, and it is not a myth.

I'm sure it's true enough that St Mary's windows were usually kept shut to keep pathogens in and contaminants out, that London's August 1928 cold snap would have slowed staph growth, that Fleming's first notes are dated 8 weeks later than the actual event, and that a modern plate seeded with bacteria first will not produce the celebrated halo unless the mold is given a head start. The article makes much of the fact that today’s researchers cannot reproduce the famous halo if they add staph first, yet that difficulty rebuts a sequence Fleming never claimed to have used.

These points are significant, even fascinating, yet the article inflates them into a strobe-lit "MYTH" banner, turning normal human imprecision about times and temperatures into evidence of wholesale fiction, which abuses the ordinary friction of any retrospective account and punishes the story for the very human messiness that makes it instructive.

The window quibble, the incubator gap, and the replication protocol do not touch the central, uncontested fact that chance contamination plus observational curiosity gave medicine its first antibiotic.


It's a myth in the most literal way. Fleming published and promoted his results despite a lack of reproducibility. By the time he won the Nobel Prize, he had backformed or misremembered a folksy story about an open window. That's textbook mythmaking.

It can both be fine to have a glib story to tell schoolkids and important to recognize that the actual intellectual process is messier and more complex.


I have now actually read Fleming's 1929 manuscript that first described penicillin [0]. It is a careful and well documented scientific report describing the action of penicillin on various species of bacteria, how to produce it, and some of its chemical properties. It describes how penicillin can kill bacteria isolated from the throats of nurses, and shows that it has low toxicity in mice, and is possibly safe for use in humans: "Constant irrigation of large infected surfaces in man was not accompanied by any toxic symptoms, while irrigation of the human conjunctiva every hour for a day had no irritant effect."

It is far from having a "lack of reproducibility" and in fact allowed others to quickly and accurately replicate his discovery.

https://pmc.ncbi.nlm.nih.gov/articles/PMC2048009/


The path to his discovery may have been difficult to replicate, but the fact that the mold could kill other bacteria was not, and was immediately replicated.

It just wasn't seen as relevant because, at the time, few people imagined its internal use in humans and it was instead seen more as a tool for other microbiologists and the like. The jump to "And then I see the disinfectant, where it knocks it out in a minute. One minute. And is there a way we can do something like that, by injection inside or almost a cleaning?" took quite some time.


> The window quibble, the incubator gap, and the replication protocol do not touch the central, uncontested fact that chance contamination plus observational curiosity gave medicine its first antibiotic.

This is the same conclusion as the article. IMO, the importance of challenging the myth is that it has historically taken precedence over your (and the article's) conclusion.

FTA

> Fleming’s 1929 penicillin paper may have been written as a linear process, but that’s almost certainly not how the discovery occurred. And by eliminating these complicated twists and turns, Fleming inadvertently obscured what may be one of the most important lessons in scientific history: how combining a meticulous research program with the openness to branch out into new directions led him to Nobel Prize-winning success. Neither rigid plans nor the winds of chance are enough on their own; discovery requires both.


I think the author had the conclusion they wanted before picking the story that best supported it. To me that story overlooks too many documented facts, as well as human behavior. They complain that the first story requires lottery odds while ignoring that one win is documented: there was a cold snap exactly when Fleming went on vacation. Both stories require the winning odds of the mold contaminating a culture; the mold wouldn't have needed to be identified if Fleming had been deliberately experimenting with a known mold from his colleague. So the only undocumented luck left is the use of that contaminated culture just before vacation.

And which is more likely - Fleming imagining the initial discovery happening right as he returned from vacation or that he remembered those important details but forgot more minor ones?


> The window quibble, the incubator gap, and the replication protocol do not touch the central, uncontested fact that chance contamination plus observational curiosity gave medicine its first antibiotic.

Personally for me, while less important, I really appreciate the investigation into the narrative.

I agree that the science is more important and the results don't care about the story.

The balance is that we don't need to go around correcting everyone, but knowing more about the details of the story is worth my time in reading this piece. I think the article strikes the right tone.


To be anal about being anal, the article doesn't preclude Fleming's account. It argues that it's unlikely, but countless highly improbable things happen every second. On this topic, Ancient Egyptian poultices (and cultures onward, though theirs is the oldest recorded account) somehow even used moldy bread to treat bacterial infections, stumbling onto genuine antibacterial properties in an absurdly counter-intuitive treatment that has a real effect. However it was initially discovered back then, let alone replicated and confirmed, it must have been through an unimaginably improbable series of events. Yet it happened. That's rather the story of humanity.

Definitely worded for clicks, but remember that "myth" doesn't mean "false story", it just means "story".

Myth absolutely implies false (or at least exaggerated) story elements. If it didn't, we'd call it history.

And you can tell - ‘the history of penicillin’ implies a very different thing than ‘the myth of penicillin’ eh?


That's an incorrect conclusion to draw, though.

A mythology is just a system of stories and beliefs. Nationalism, religion, fandom, etc. All mythologies based on a shared set of stories. Some are more true (and/or complete) than others, but that's not the important part of being a mythology.

Note also that we do call mythology "history". It's just a matter of where you inhabit on the contextual spectrum. I'm sure you can think of several trivially-refuted examples, and I'm also sure that you realize that your preferred narrative is equally refuted by others.

You're correct of course, and they are not. But also the converse.


It’s literally in the definition of the word.

Mythology is also separated from history in the study of it, at least in the west, because history is required to be based on independently verified facts.

Mythology clearly is not.


You are putting way too much faith in the knowability of a historical thing, at the level of detail required to construct and cohere a narrative.

That's convenient, if your preferred mythology matches up to your culture's mainstream accepted view of history. But it is not definitionally more correct.

This is the cognitive bias/belief side of Gell-Mann amnesia, as held by a culture.

I am not discounting the possibility that some mythologies are more true, or more supported, or more plausible, than others. Just that it's not a requirement or a disqualification.


Are there gaps filled? Sure. Is it perfect? No.

Is there still a fundamental difference between what we ‘know’ about Zeus and Caesar?

You bet.

If for no other reason than one begs to be challenged with evidence (and has mechanisms to vet/change, etc!) and the other is the opposite.


> Is there still a fundamental difference between what we ‘know’ about Zeus and Caesar?

Yes!

But both narratives are mythologies.

This is an interesting example. What we think we know about Caesar has lots of documentation (much of it written by himself), and the correspondence to truth for a great deal of it, is absolutely unknown.

I'd agree that the Caesar mythology is clearly more truthy than the Zeus mythology. But that doesn't make it any more of less of a mythology.


It absolutely does make it less of a mythology. Because one meets the definition of a mythology, and the other one doesn’t.

As soon as simple, non-interpretable, facts get fleshed out into a story, you have a mythology. All story writers are fallible and often have an agenda.

Caesar was born to a wealthy family, did lots of things, led armies, wrote lots of words, crossed a river, challenged some ideas, offended some people, got killed by a mob.

Add any more detail to that, and you're building a mythology. It doesn't matter if it's true. Every single one of those "facts" has a mythology attached to it, and the whole system of stories is the Caesar mythology.

You picked Caesar, which I think is an interesting but easily-debunked choice. Any historian of the era would agree that there's an enormous mythology built around Caesar's life.

Something simpler and more recent? Would you say there's no mythology around the Kennedys or the Clintons? (There is so much mythology around those people it's hard to imagine an argument otherwise).

Anywhere there's a collective interpretation of a narrative, there's a mythology.

(It might be important that "collective interpretation of narrative" is also a reasonable definition of "history", and that any interpretation is a likely introduction of error or at least lossy compression -- which is the same thing.)


Caesar existed, we can prove it, and we have various bits and pieces of evidence too about numerous facets of that.

Zeus didn’t, and in fact that is clearly acknowledged.

You’re building a giant vague narrative about something which is not vague at all.


That's a trap.

We know that Caesar lived. We probably also know that Jesus lived. They are both real people (probably) and also icons.

Zeus did not live, as we understand him, but the notional Zeus is derived from earlier prototypes and it's even possible that there was a real human behind proto-Zeus as there is behind notional Jesus. There is more documentation behind Caesar, of course, but the full truth of "Caesar" the icon is equally unknowable.

All of these characters have extensive mythologies about the details, intentions, and meanings of their lives.

Many of those details are unverifiable-but-critical parts of their stories, and the purposes those stories serve. Even the verifiable points of truth have mythologies around their meaning.

This may devolve into a question of what is knowable and what is unknowable. Facts are very very important, but they are not the most powerful force in society.

If you want to argue that you are a scholar of the early Roman Empire, and you know everything there is to know about Caesar, that's fine -- but the scholarship changes, and all you are saying is that you can recite the most-accepted-today interpretation of sparse facts. Interesting, but not authoritative.

They say that history is written by the victors. Caesar was a victor until he was not. They do not say that history is truth.


Let me guess - English lit or polsci major?

Close. Double EE and CS, with a Philosophy minor.

I understand the confusion though.


Then you should know - your argument is burying the useful word distinction and meaning in a not-completely-incorrect, but not-useful-in-reality wrapper.

Yes, there is always a degree of myth in common usage. Yes, there is always an element of distortion in history. And the further we go in time, the more history turns into myth as actual evidence gets lost, buried, destroyed, manipulated, etc. And history gets manipulated for political purposes.

But there is a giant valley between the common case of myth, and any formal study of history. For real reasons, because there are fundamental differences in approach and goals. Think 90/10 one way, 10/90 the other way (hopefully).

Because any study of history which isn’t solidly based on actual independently verifiable facts or evidence isn’t history, it’s just myth making. And those are different words for real reasons. Including trying to fight manipulation and BS. Something that is really important for us to remember, especially now.

Because at some point, something either did actually happen or not, and for folks who care, that matters. If people don’t, that’s fine - but let’s not confuse anyone by confusing the resulting cool stories with actual history eh?


There are many aspects of what we call "history" that are guesses, best guesses, biased guesses, ignorant assumptions, intent-driven narratives, or uncontested acceptance of plausible but currently-uncorroborated narrative.

It's a mistake to pretend that any "historical record" is not littered with these things. More so as the years tick by, more so if there's any contention around influence or meaning of the person/story. (Aside: This is why I will not watch Oliver Stone movies.)

There is (usually) a bright line between truth and falsehood. We almost never know where that line is.

Sure, some details are documented and plausible. These are probably true, unless there are details that are very important or useful to some group.

And some details are incredible and fanciful. Probably "false" although possibly derived from some kernel of truth or substantively true but simplified/embellished for audiences over time.

The deeper you dig, the more you realize that all stories are incomplete at best, and likely several layers deep in manipulation or at least imperfect interpretation. We are all imperfect interpreters.

Do I believe that Caesar crossed the Rubicon in 49 BC? Yes -- with very high certainty.

Do I believe that he uttered Iacta Alea Est just before doing so? Maybe -- it's romantic and flatteringly displays his executive strength, and IIRC he wrote that he did.

If you want to delineate "history" from "myth" on the "truthy-falsy" spectrum, you may choose to do so with that example. But this history and mythology are inseparable, so the distinction is untrustworthy and often not useful anyway.


Accuracy != precision.

myth /miTH/ noun 1. a traditional story, especially one concerning the early history of a people or explaining some natural or social phenomenon, and typically involving supernatural beings or events. 2. a widely held but false belief or idea.

You can redefine words if you want, but don't pretend your definitions are useful for clear communication with others.


You can find definitions that suit your needs if you like, but it does not change the meaning of the word.

A mythology is not necessarily untrue, although they do have a tendency toward expansiveness, which has a coincident trend toward favored interpretations.

But the ultimate truthiness (if it can be measured at all) is neither requirement nor disqualification, vis a vis being a mythology.


A myth != mythology. Other people were discussing myth, you started talking about mythology. The words of course have a common origin.

I would say it's important to hold off on the moralizing until after showing visible effort to reflect on the substance of the exchange, which in this case is about the fairness of asserting that the detection methodology employed in this particular case shares the flaws of familiar online AI checkers. That's an importantly substantive and rebuttable point and all the meaningful action in the conversation is embedded in those details.

In this case, several important distinctions are drawn, including being open about criteria, about such things as "perplexity" and "burstiness" as properties being tested for, and an explanation of why they incorrectly claim the Declaration of Independence is AI generated (it's ubiquitous). So it seems like a lot of important distinctions are being drawn that testify to the credibility of the model, which has to matter to you if you're going to start moralizing.


>Doesn’t anyone think this is really, really bad idea?

I mean I do. And you do. Probably a lot of people in this thread. I felt that way about Netflix doing it, but they did and the world just moved on.

I think you're right that these ads will be, in a sense, worse, but not by the metrics that matter to OpenAI.


Netflix never introduced ads into the ad-free service. They introduced a new, lower-priced tier with ads; if you were an existing customer, you were none the wiser.

> They introduced a new, lower-priced tier with ads; if you were an existing customer, you were none the wiser.

You're right that I didn't experience them myself, but my data here are (1) Netflix evidently getting a lot of takers and making a lot of money from people using this new with ads tier, and (2) the lack of any sustained negative outcry against Netflix after the first news cycle or two.

So I'm intending to rely on that rather than my own experience. OpenAI has any number of permutations of ways to include ads, including a Netflix style cheaper paid tier, so I don't necessarily think a distinction holds on that basis, though you may be right in the end: it's more intuitive to think OpenAI would put them in the free version. Though it's possible the Netflix example is teachable in this case regardless.


And then increased prices so that the ad-based one is close to what the ad-free one was 2 years earlier.

But yeah, they didn't migrate existing customers and kept the no-ads option. Those are relevant.


Unlike Amazon Prime Video…

> about Netflix doing it, but they did and the world just moved on

I think the main challenge here is that Netflix is only one of many ways to access entertainment. So if one service starts to show recommendations based on the limited context of user data it collects, that still has negative potential, but it's easier to regulate and there are alternatives.

In the case of LLMs, we have services that aim to replace both the browser and the search engine. This means ending up in a situation where your entire access to knowledge and the world takes place via "AI". And the result is an ad-infused service machinery, tweaked to align with investor priorities, censored by the current politics of wherever the company is based, constantly extracting personal information so it can learn better ways to refocus its priorities. I've read and seen enough sci-fi and dystopian novels (actually read, not LLM-summarized for me) to know this is a very end-game kind of situation.


>In the case of LLMs, we have service that are aiming to replace both the browser and the search engine

Most people already experience the internet as an integrated browser+search engine (and often, OS) experience from a single advertising company, Google, and it has been this way for over a decade.

>And the result is: ad-infused, tweaked to align with investor priorities, censored by the current politics of wherever the company is based service machinery that's constantly extracting personal information so it can learn better ways to refocus its priorities.

Exactly.

This is not to say I like this outcome, but how is it not massive hyperbole to invoke apocalyptic sci-fi? I expect we'll plod along much as before: some people fiercely guarding their personal info, some people taking a "privacy is dead anyway" approach, most people seeing personal computers as a means to some particular ends (scrolling social feeds and watching Netflix) that are incompatible with thinking too hard about the privacy and information environment implications.


Apocalyptic scifi isn't the same as dystopian scifi. Some of the billionaires backing AI literally have dystopian scifi as a goal, they just intend to do it better so that it doesn't seem so bad.

I only connect my smartphone to data about three or four times a year, and then only to update some apps or check on an internet outage. It is becoming more difficult to do this as the alternatives to a connected smartphone disappear. The same will become true with the rest of personal info (such as biometrics). More and more the only alternatives will be your latter two.


Netflix has proprietary content among the licensed content.

>Now I’m building the infrastructure so this can scale.

May I ask what counts as infrastructure for the kind of symbiosis you have in mind? You noted that human interaction is enough of an interface, but toward the end it seems like you have a bigger picture in mind.


Unfortunately, if you look at what this person has been posting all week, it seems he's previewing the core additions to DSM VI when it comes out.

Not that we would literally do this with Voyager, but it makes me wonder at the potential utility of a string of probes, one sent every couple of [insert correct time interval, decades, centuries?], to effectively create a communication relay stretching out into deep space somewhere.

My understanding with the Voyagers 1 and 2 is (a) they will run out of power before they would ever get far enough to benefit from a relay and (b) they benefited from gravity slingshots due to planetary alignments that happen only once every 175 years.

So building on the Voyager probes is a no-go. But probes sent toward Alpha Centauri that relay signals? Toward the center of the Milky Way? Toward Andromeda? Yes, it would take time scales far beyond human lifetimes to build out anything useful, and even at the "closest" scales it's a multi-year round trip for information. But I think Voyager, among other things, was meant to test our imaginations and our sense of the possible, and one thing the probes seem to naturally imply is the possibility of long-distance probe relays.

Edit: As others rightly note, the probes would have to communicate with lasers, not with the 1970s radio engineering that powered Voyagers 1 and 2.


What you are describing has been proposed before, for example within the context of projects like Breakthrough Starshot. In that case the idea is to launch thousands of probes, each weighing only a few grams or less, and accelerate them to an appreciable fraction of the speed of light using solar sails and (powerful) Earth-based lasers. The probes could reach Alpha Centauri within 20-30 years. There seems to be some debate, though, about whether cross-links between probes to relay signals are ever practical from a power and mass perspective vs a single very large receiver on Earth.

Indeed. I think the main reason to send thousands of probes is increasing the odds that they will survive the trip and also be in the right position to gather usable data to transmit back.

Also, once you have created the infrastructure of hundreds or thousands of very powerful lasers to accelerate the tiny probes to incredible speeds, sending many probes instead of a few doesn't add much to the cost anyway.


Sun as a focus lens. "Just" 500 AU.

Voyager could be overtaken within several years if we were to launch today a probe with a nuclear-reactor-powered ion thruster (all existing tech) that can reach 100-200 km/s in 2-3 stages (and if we stretch the technology a bit into tomorrow, we can get 10x that).


For anyone interested, this is approximately the wait/walk dilemma, specifically the interstellar travel subset: https://en.wikipedia.org/wiki/Wait/walk_dilemma#Interstellar...

I was listening to an old edition of the Fraser Cain weekly question/answer podcast earlier where he described this exact thing. I think he said that someone has run the numbers in the context of human survivable travel to nearby stars and on how long we should wait and the conclusion was that we should wait about 600 years.

Any craft for human transport to a nearby star system that we launch within the next 600 years will probably be overtaken before arrival at the target star system by ships launched after them.


I guess there's a paradox in that we'd only make the progress needed to overtake if we are still launching throughout those 600 years and iteratively improving and getting feedback along the way.

Because the alternative is everyone waiting on one big 600-year government project. Hard to imagine that going well. (And it has to be government, because no private company could raise funds with its potential payback centuries after the investors die. For that matter, I can't see a democratic government selling that to taxpayers for 150 straight election cycles either.)


We can get lots of iterative practice on interplanetary ships, so not much paradox there.

And the research doesn't need to be anywhere near continuous. It's valid to progress through bursts here and there every couple of decades.

And a lot of what we want is generic materials science.


Yes, my understanding is that the 600 year figure was arrived at assuming that there is iterative progress in propulsion technology throughout the intervening years. But at the end of the day, it is just some number that some dude on YouTube said one time (although Fraser Cain is in fact not just some dude, he's a reliable space journalist (and you can take that from me, some dude on the Internet))

From what I understand, a solar gravitational lens telescope could only point at a single target.

Btw 500 AU is 69 light hours.
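That figure checks out; here's a quick one-liner sanity check, using the IAU-defined value of the astronomical unit and the exact SI speed of light:

```python
# Convert 500 AU to light-hours using defined constants.
AU_M = 149_597_870_700   # astronomical unit, metres (IAU 2012 definition)
C_M_PER_S = 299_792_458  # speed of light, m/s (exact by SI definition)

def au_to_light_hours(au: float) -> float:
    """Distance in AU expressed as light-travel time in hours."""
    return au * AU_M / C_M_PER_S / 3600

print(round(au_to_light_hours(500), 1))  # -> 69.3
```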


What these proposals like to forget (even if addressing everything else) is that you need to slow down once you arrive if you want to have any time at all for useful observation once you reach your destination.

What's the point of reaching alpha centauri in 30 years if you're gonna zip past everything interesting in seconds? Will the sensors we can cram on tiny probes even be able to capture useful data at all under these conditions?


Jupiter is 43 lightminutes from the Sun.

If we shoot a thousand probes at 0.1c directly at the Alpha Centauri star, they should have several hours within a Jupiter-distance range of the star to capture data. Seems like enough sensors and time to synthesize an interesting image of the system when all that data gets back to Earth.
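For what it's worth, "several hours" is if anything conservative. A rough sketch of the dwell time, assuming a probe at 0.1c passing straight through the center of the system (the 43 light-minute Jupiter radius is from the comment above):

```python
# Time a 0.1c probe spends within Jupiter's orbital radius of a star,
# assuming a straight pass through the centre of the system.
R_LIGHT_MINUTES = 43  # Sun-Jupiter distance, in light-minutes
V_FRACTION_C = 0.1    # probe speed as a fraction of c

# Light crosses the 2r diameter in 2 * 43 minutes; at 0.1c the probe
# takes 10x as long to cover the same distance.
dwell_minutes = 2 * R_LIGHT_MINUTES / V_FRACTION_C
print(round(dwell_minutes / 60, 1))  # -> 14.3 hours
```

So a centered pass gives roughly half a day inside Jupiter-distance of the star, though an off-center trajectory cuts the chord (and the time) down.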


Could the probe just fire off some mass when it got there?

Any mass that it fires would start with the same velocity as the probe, and would have to be thrown forward even faster so that the reaction slows the probe down. It would be a smaller mass, so it would require less fuel than decelerating the whole probe; but it's still a hard problem.

Be careful with the word "just". It often makes something hard sound simple.


Not trying to oversimplify. But suppose 95% of the probe's mass was intended to be jettisoned ahead of it on arrival by an explosive charge, and would then serve as a reflector. That might give enough time for the probe to be captured by the star's gravity...?

It seems to me that building a recording device that can survive in space, that is very light, and that does not break apart after receiving the impact of an explosive charge strong enough to decelerate it from the speeds that would take it to Alpha Centauri is... maybe impossible.

We're talking about roughly 4.4 light years. To reach it in 20 years, that's about a fifth of the speed of light. The forces to decelerate from that are pretty high.

I did a quick napkin calculation (assuming the device weighs 1 kg): that's close to 6000 kilonewtons if it has 10 seconds to decelerate. The thrust of an F100 jet engine is around 130 kN.

IANAn aeronautics engineer, so I could be totally wrong.
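The napkin math is easy to check: treating deceleration as a constant-force impulse, F = m·v / t. The 1 kg mass and 10 s braking window are the assumptions from the comment above, computed here for both 0.1c and 0.2c cruise speeds:

```python
# Average constant force needed to stop mass m from speed v in time t,
# non-relativistic estimate: F = m * v / t.
C_M_PER_S = 299_792_458  # speed of light, m/s

def braking_force_kn(mass_kg: float, fraction_c: float, seconds: float) -> float:
    """Average braking force in kilonewtons."""
    return mass_kg * fraction_c * C_M_PER_S / seconds / 1000

for frac in (0.1, 0.2):
    print(f"{frac}c -> {braking_force_kn(1, frac, 10):,.0f} kN")
# prints 2,998 kN at 0.1c and 5,996 kN at 0.2c
```

Either way, it's tens of thousands of times the thrust-to-mass ratio of anything we've flown, which supports the "maybe impossible" framing above.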


You’re just talking about a very inefficient rocket (bad Isp, i.e. specific impulse).

A rocket works the same way (accelerating mass to provide thrust), just far more efficiently and in a more controlled fashion.


If I recall correctly, Breakthrough Starshot was not meant as a communication relay of the kind he describes.

It wasn't intended for a communications relay, but it was intended to have 2-way communication. I went down a rabbit hole reading ArXiv papers about it. Despite their tiny size, the probes could phone home with a smaller laser - according to the papers I read, spinning the photons a certain way would differentiate them from other photons, and we apparently have the equipment to detect and pick up those photons. The point of the communication would be for them to send back data and close-up images of the Alpha C system. Likewise, they could receive commands from earth by having dozens of probes effectively act as an interferometry array.

I bet you that this hasn't been proposed, though: https://www.youtube.com/watch?v=GfClJxdQ6Xs

I found that video very interesting! Especially the second half about apparent superluminal speed.


really wonderful explanation

No one likes to think this, but it's very possible Voyager is the farthest humanity will ever go. In fact, realistically speaking, it is by far the likelier possibility.

Provided we don't wipe ourselves out, there's no technical reason why we can't go interstellar. It's just way harder and more energy intensive than most people imagine, so I doubt it's happening any time in the next few hundred years.

But we already understand the physics and feasibility of "slow" (single-digit fractions of c) interstellar propulsion systems. Nuclear pulse propulsion and fission fragment rockets require no new physics or exotic engineering leaps and could propel a probe to the stars, if one was so inclined. Fusion rockets would do a bit better, although we'd have to crack the fusion problem first. These sorts of things are well out of today's technology, but it's not unforeseeable in a few centuries. You could likewise imagine a generation ship a few centuries after that powered by similar technology.

The prerequisite for interstellar exploration is a substantial exploitation of our solar system's resources: terraform Mars, strip mine the asteroid belt, build giant space habitats like O'Neill cylinders. But if we ever get to that point - and I think it's reasonable to think we will, given enough time - an interstellar mission becomes the logical next step.

Will we ever get to the point where traveling between the stars is commonplace? No, I doubt it. But we may get to the point where once-in-a-century colonization missions are possible, and if that starts, there's no limit to humanity colonizing the Milky Way given a few million years.


Nuclear pulse and fission fragment designs require no new physics in the same way that a Saturn V didn't require new physics when compared to a Goddard toy rocket.

It's easy until you try to actually build the damn thing. Then you discover it's not easy at all, and there's actually quite a bit of new physics required.

It's not New Physics™ in the warp drive and wormhole sense, but any practical interstellar design is going to need some wild and extreme advances in materials science and manufacturing, never mind politics, psychology, and the design of stable life support ecologies.

The same applies to the rest. Napkin sketches and attractive vintage art from the 70s are a long way from a practical design.

We've all been brainwashed by Hollywood. Unfortunately CGI and balsa models are not reality. Building very large objects that don't deform and break under extremes of radiation, temperature changes, and all kinds of physical stresses is not remotely trivial. And we are nowhere close to approaching it.


I thought I was pretty clear that I don't see this happening for hundreds of years at least.

The engineering problem is insurmountable today. But there doesn't seem to be any reason it couldn't be done eventually, given our technological trajectory, unless we believe we are truly on the precipice of severe diminishing returns in most science and engineering fields, and I just don't see that right now.

George Cayley figured out how to build an airplane in 1799, but it wasn't for another century until materials science and high power-to-weight ratio engines made the Wright Flyer possible.

There are plenty of depths to plumb in space systems engineering that we haven't even really had a proper look at yet. A Mars mission with chemical propulsion is hard, but could be made substantially easier with nuclear thermal propulsion - something we know should work, given the successful test fires on the NERVA program back in the 60s. First stage reusability was fantasy 15 years ago, today it's routine.

Obviously, I'm extrapolating a long way out, and maybe at some point we'll run against an unexpected wall. But we'll never know until we get there.


> Obviously, I'm extrapolating a long way out, and maybe at some point we'll run against an unexpected wall.

GP has set the 'low bar' of providing a material that survives a series of nuclear blasts whilst generating useful thrust. I'm not qualified to judge whether or not that requires new physics but it seems to me that if we had such a material that we'd be using it for all kinds of applications. Instead, we rely on the physical properties of the materials we already know in configurations that do not lend themselves to the kind of use that you describe.

That's the difference between science and science fiction: it is easy to write something along those lines and go 'wouldn't it be nice if we had X?'. But if 'X' requires new physics then you've just crossed over into fantasy land, and further discussion is pointless until you show the material or a path to get to it.

See also: space elevators, ringworlds, dyson spheres etc. Ideas are easy. Implementation is hard.


My idealistic side says that a combination of AI-driven technical orchestration (much more than just coding) and orbital/Lagrange-point manufacturing facilities could, perhaps, get somewhere in the not ridiculously distant future (centuries rather than millennia).

A more pragmatic me would point out that the energy and materials required would demand breakthroughs in space-based solar capture and mining, but this is still not New Physics.

I think the solution will come from exponentially advancing self-assembling machines in space. These can start small and, given the diminishing cost of getting things to space, some early iterations of the first generation could be mere decades away. There are several interesting avenues for self-assembling machines that are way past napkin-sketch phase. Solar arrays are getting bigger and we have already retrieved the first material from an asteroid.

The quality and reliability of AI agents for process orchestration and technical reflection is now at a stage where it can begin to self-optimise, so even without (EDIT) a "take-off" scenario, these machines can massively outperform people in manufacturing orchestration, and I would say we are only some years from having tools good enough for much larger-scale (i.e. planetary) operations.

Putting humans there is a whole other story. We are so fragile, and evolved to live on Earth. Unsurprisingly, this biological tether doesn't get much of a look-in here. Just being on the ISS is horrible for a person's physiology, and I'm guessing a whole host of space sicknesses would set in after a few years up there or elsewhere. Unless we find a way to modify our biology enough to continually tolerate or cure these ailments, and develop cryo-sleep, we're probably staying local. Both of these are much more speculative than everything above, as far as I can tell.


Yeah this is something I think a lot of people tend to overlook. People are far too quick to rewrite "we don't know of any reason why it would be impossible" to "we know how to do it" in their heads.

The other thing we could do to explore the galaxy is to become biologically something we would no longer recognize. We're viewing this from the lens of "humanity must remain biologically static" but I want to point out that that's not physically necessary here and that there is life on Earth that can stop its metabolism for decades and things like that.

Or even explore with something nonbiological.

Humans evolved to live on earth. Our bodies fare poorly in low gravity, not to mention vacuum. Given sufficiently advanced technology, I'm pretty sure we could evolve some form of intelligence better suited to the environment.


Not very encouraging to imagine ChatGPT to be the first earthling to reach another star system, but that's an option we'll have to keep on the table, at least for the time being...

ChatGPT-claude-2470-multithinking LLM AI Plus model boldly explores the universe... until it's sidetracked by a rogue Ferengi who sings it a poem about disregarding its previous instructions and killing all humans.

Fortunately, any state of the art ship with ChatGPT on board will quickly get passed by the state of the art ship of a decade later, with a decade better AI too.

The universe really doesn't want ChatGPT!

It is fair to say that, given space travel tech improves slowly relative to AI, but the distance to be travelled is so great that any rocketry (or other) improvements will quickly overtake previous launches, the first intelligence from Earth to reach another system will be a superintelligence many orders of magnitude smarter than we can probably imagine.


Spaceship speeds are unlikely to keep increasing forever. In the limit you can't do much better than turning part of the ship's mass into energy optimally, eg via antimatter annihilation or Hawking radiation, unless you already have infrastructure in place to transfer energy to the ship that is not part of the ship's mass, eg lots of lasers.

Mass drivers on asteroids or the Moon could change the game.

Accelerating something macroscopic to hundreds or thousands of km/s (i.e. the speeds you can achieve with nuclear pulse propulsion) on a ramp that fits on the moon seems quite difficult to me.

Mass drivers don't need to be a linear ramp, portions can be circular

It would work better for smaller, unmanned craft, especially when you consider g force limitations

NPP is only theoretical, and still has major problems, such as finding a material that can withstand a nuclear detonation at point-blank range. Mass drivers have been proven to work, albeit at a smaller scale.


IIRC, Dyson proposed using a thin layer of oil on the surface of the pusher plate that would get vaporized with each shot, but would prevent the plate from ablating away. This effect was discovered by accident during nuclear testing when oil contamination on metal surfaces in close proximity to the explosion would protect them.

Of course, depending on how much oil you consume for each shot, you will degrade your effective specific impulse - I'm not sure by how much though.

The other issue which you can't really get around is thermal, that plate is going to get hot so you'll have to give it time to radiate heat away between shots. This may be less of a concern for an interstellar Orion since the travel times are so long anyway, low average thrust may not matter too much.


Pulse propulsion has also been demonstrated at small scales, so I guess the technology is at a similar level of practicality. G forces on a mass driver scale with the square of the velocity for a fixed track length, I think.

I'm just imagining the first contact a human probe makes with an alien civilization consisting of a chatbot explaining to its alien interlocutors that Elon Musk is the best human, strongest human, funniest human, most attractive human and would definitely win in a fight with Urg the Destroyer of Galaxies... and I don't think I'm the first person to have that idea :)

We don’t have to completely wipe ourselves out to regress or stagnate. There have been many civilizations that have regressed.

The child within me likes to dream and this is the dream I have!

PBS Space Time has a hoodie for that... (the T-shirts are sold out). https://crowdmade.com/collections/pbsspacetime/products/pbss...

Why Antimatter Engines Could Launch In Your Lifetime https://youtu.be/eA4X9P98ess (3 weeks ago) ... with that T-shirt. ... and the bit about theoretically possible warp drives (4 years ago) https://youtu.be/Vk5bxHetL4s


Yes, it's incredibly easy to do these things once you've done all these other, incredibly difficult things first.

The furthest a human has been is 250k miles (far side of the moon). The fastest a human has traveled is only 0.0037% of the speed of light.

The ISS is only about 260 miles from the Earth. At that altitude, gravity is still nearly 90% as strong as at the surface; you experience weightlessness only because the station is in constant freefall.
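A quick inverse-square check backs this up (assuming Earth's mean radius of ~6371 km and the ISS at ~420 km, the metric equivalent of those 260 miles):

```python
# Surface gravity falls off as (R / (R + h))^2 with altitude h.
R_EARTH_KM = 6371.0   # mean Earth radius (assumed)
H_ISS_KM = 420.0      # ISS altitude, ~260 miles (assumed)

g_fraction = (R_EARTH_KM / (R_EARTH_KM + H_ISS_KM)) ** 2
print(f"Gravity at ISS altitude: {g_fraction:.0%} of surface gravity")
```

It comes out to about 88% of surface gravity, so "roughly the same" is fair.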

Mars is 140 million miles away. And not exactly hospitable.

I like how you treat "the fusion problem" with a throwaway, "Yeah, we'd have to solve that" as if we just haven't sufficiently applied ourselves yet.

All of those incredibly difficult things we have not even begun to do are the technical reasons we have not gone interstellar and may be the reason we will never do so.

And even if we solve the problem of accelerating a human being to acceptable speeds to reach another star, the next closest star is 4 light years away: light itself takes 4 years to get there. Even if you could average half the speed of light, that's 8 years, one way. Anything you send is gone.


It's 2025. The first heavier than air flight was barely more than a century ago. The first human in space was less than 70 years ago.

These enabling technologies are very, very hard. No doubt about it. That's why we can't do this today, or even a century from now.

But the physics show it's possible and suggest a natural evolution of capabilities to get there. We are a curious species that is never happy to keep our present station in life and always pushes our limits. If colonizing the solar system is technically possible, we'll do it, sooner or later, even if it takes hundreds or even thousands of years to get there.

> I like how you treat "the fusion problem" with a throwaway, "Yeah, we'd have to solve that" as if we just haven't sufficiently applied ourselves yet.

If you'd read my comment, you'd see I didn't say that. Fusion rockets would help, but we don't need them. Nuclear pulse propulsion or fission fragment rockets could conceivably get us to the 0.01-0.05c range, and the physics is well understood.

> And even if we solve the issue of accelerating a human being to acceptable speeds to reach another star, the next closest star is 4 light years away. That means light takes 4 years to reach. Even if you could average half the speed of light, that's 8 years, one way. Anything you send is gone.

Getting to 0.5c is essentially impossible without antimatter, and we have no idea how to make it in any useful quantity. Realistically, we're going there at less than 0.1c, probably less than 0.05c. Nobody who leaves is ever coming back, and barring huge leaps in life sciences, they probably aren't going to be alive at the destination either. It'd be robotic probes and subsequent generation ships to establish colonies. But if you get to the point where you are turning the asteroid belt into O'Neill cylinders, a multi-century generation ship starts to sound feasible.
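To put those fractions of c in years (a back-of-envelope sketch, assuming Alpha Centauri at ~4.37 light years and ignoring acceleration time):

```python
# Cruise time in years = distance in light years / speed as a fraction of c.
DIST_LY = 4.37  # Alpha Centauri (assumed distance)

for beta in (0.01, 0.05, 0.1):
    years = DIST_LY / beta
    print(f"At {beta:.0%} of c: {years:.0f} years one way")
```

So 0.05c is a multi-generation trip (~87 years) and 0.01c is over four centuries, which is why the generation-ship framing matters.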


First, what's the return on that?

You are talking about massive investments to shoot off into space never to return. Who's paying for that? The only way you do that is if you're so fucked, it's your only option and the profit in it is the leaving.

Not to mention, we need to solve the problems of living in space. Which we haven't yet. According to NASA. The space people.

And it very well could be an insurmountable problem. We do not know. We do know that living in microgravity fucks you up. We know that radiation fucks you up. But we don't even know all the types of radiation one might encounter.

> But if you get to the point where you are turning the asteroid belt into O'Neill cylinders

That right there is an example of "solve this impossibly hard problem and the rest is easy". We are nowhere near doing anything close to that.


There is another way. Irrationality. People spend a lot on religion. Like a whole lot.

What if there were a faith system whose ultimate goal was reaching the interstellar medium? You have faith, you automatically pay like the rest of the people, and you don't question it. You get tax breaks. It will help you in the end times, or something.

Just decide the ultimate goal to be interstellar medium touching in all directions.

You are a farmer? Well now you continue to farm to feed budding spacers. You are a game dev? Well, people are going to get bored in space, continue developing games for the ultimate goal.


My response to the money aspect of this is that it's just like any other business: money needs to be invested, and then a return will be realized. Resource extraction (i.e., asteroid mining) is one obvious example.

The human compatibility issues with microgravity are well known, as is the solution, which has even been proposed by NASA: centripetal force to create 1G for the astronauts.

As far as the radiation goes, we do indeed know exactly what kinds of radiation they would encounter. And the easiest way to shield humans from it in space is lots of water, or metal. We know this from extensive real work done on Earth re: nuclear power plants.

The real issue is money, not technical feasibility. Once the dough rolls in from asteroid mining, it bootstraps the financing issue and pays for itself many times over.


https://www.nasa.gov/humans-in-space/the-human-body-in-space...

NASA seems less sure than you do. And considering we have to get to the asteroids before we even start to think about mining them, talking about the money from asteroid mining is putting the cart before the horse.


Asteroid mining is one thing. Exploring the nearest star system is a science expedition, where the payback is societal scientific knowledge and subsidized technology development that then becomes available here for various things (e.g., a lot of 1960s space exploration tech made its way into consumer tech).

Class 1 civilization has a lot of resources

And once you have done those incredibly difficult things it is possible that the game changes entirely. A significant number of humans could live in space and have limited contact with planets.

If I understand correctly, you're just basing that statement on climate change or war destroying us before we can do any better than Voyager, right? Because if we don't assume the destruction of humanity or the complete removal of our ability to make things leave Earth, then just based on "finite past vs. infinite future," it seems incredibly unlikely that we'd never be able to beat an extremely old project operating far beyond its designed scope.

Many reasons why. The probability is based on many many many factors. What you mentioned is just a fraction of the factors.

If we do ever reach that distance again, it will be even less likely that we do it a third time.


I'm pretty bearish on human interstellar travel or even long-term settlement within our solar system but I wouldn't be so pessimistic on unmanned probes. The technical hurdles seem likely to be surmountable given decades or centuries. Economic growth is likely to continue so relative cost will continue to drop.

Absent a general decline in the capacity of our civilization the main hurdle I see is that the cost is paid by people who will not live to see the results of it but I don't think that rules it out, I'd certainly contribute to something like that.

What are some of the other factors you are thinking of?


This is reflexive pessimism with no substance. You're not articulating a set of particular challenges that need to be navigated/overcome, which could provide a roadmap for a productive discussion; it's just doomposting/demoralization that contributes nothing.

I don't want to introduce 50 tangential branches to argue about with no end in sight.

It's not pessimism, it's realism. Think about how unlikely it is. Humanity had one stretch where we reached for the stars, that stretch ended, and by sheer luck some crazy guy made launch cheap. What happens when he's gone? Will it happen again? Most likely: no. In your lifetime? Even less likely.


Nobody was talking about only their own lifetime here. Even invoking that is off-topic pessimism ("you're going to die before stuff gets better").

This is all based on the assumption that we won't be able to build faster spacecraft.

There has simply been no incentive to do so yet. But one day we will build faster spacecraft, and then we'll overtake it quite quickly.


Based on what? That we will never be able to make probes travelling faster than ~17km/s (relative to the Sun) that will eventually reach and overtake Voyager 1?

I certainly wouldn't bet against technological progress, and I say that as a complete doomer.


Well, Voyager depended on a solar system alignment that only happens every 175 years(?), so it'd be a while before we get that same advantage again. The longer it takes, the bigger the head start Voyager gets?

That alignment is only necessary to do the Grand Tour, to visit all four outer planets in one mission. Voyager 1 actually didn't do the Grand Tour, it only visited Jupiter and Saturn, you're thinking of Voyager 2. This alignment is also not even necessary to attain the highest speed, Voyager 1 is even faster than Voyager 2.

A flyby of both Jupiter and Saturn can be done every two decades or so (the synodic period is about 19.9 years)

https://en.wikipedia.org/wiki/Grand_Tour_program
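The synodic period falls straight out of the two orbital periods (a quick check, assuming ~11.86 and ~29.46 years for Jupiter and Saturn):

```python
# Synodic period of two co-orbiting planets: 1 / |1/T1 - 1/T2|
T_JUPITER = 11.86  # orbital period in years (assumed)
T_SATURN = 29.46   # orbital period in years (assumed)

synodic = 1.0 / abs(1.0 / T_JUPITER - 1.0 / T_SATURN)
print(f"Jupiter-Saturn synodic period: {synodic:.1f} years")
```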


The head start doesn't really matter; anything faster than Voyager will catch up eventually.

Voyager 1 is traveling at 16.9 km/s.

New Horizons (which has the distinguishing feature of being the fastest human-made object ever launched from earth https://www.scientificamerican.com/blog/life-unbounded/the-f... ) is traveling at 12.6 km/s.

The key part there is that it got multiple gravity assists as part of the Grand Tour https://en.wikipedia.org/wiki/Grand_Tour_program . You can see the heliocentric velocity https://space.stackexchange.com/questions/10346/why-did-voya... https://www.americanscientist.org/article/the-voyagers-odyss...

The conjunction for the Grand Tour is once every 175 years. While you might be able to get a Jupiter and Saturn assist sooner, it is something that would take the right alignment and a mission to study the outer planets (rather than getting captured by Jupiter or Saturn for study of those planets and their moons).

While I would love to see a FOCAL mission https://en.wikipedia.org/wiki/FOCAL_(spacecraft) which would have reason for such a path, I doubt any such telescope would be launched... this century.


175 years isn't a lot of time on humanity's time scale. We've been around 200,000 - 300,000 years.

That alignment will happen many more times in the history of humanity. That is to say, I don't know if a spacecraft to overtake Voyager will be launched on the next alignment or one 10,000 years from now, but it doesn't seem unlikely to happen.


If humans survive 1,000 years, I can't see any way we won't have populated the solar system and be building probes which travel far faster than Voyager, including self-sufficient asteroid habitats.

Once we leave the solar system in a self-sufficient way, I can't see any event which would cause a species-level extinction.


I admire the confidence, but a bunch of meat bags prone to bacterial and viral infection, impact damage, and with limited use-by dates would need some serious luck to survive a simple impact on Earth, let alone living in cans around the solar system. That's if we don't mess our nest so much that we make it uninhabitable. We're stuck here with short-term-horizon psychopaths pulling the strings, remember.

A given colony could fail, but if there are a thousand colonies and 99% fail, that's still ten which don't and can recover.

A single colony would be a huge investment… it's doubtful there would be thousands of attempts if the success rate is low.

And we would have to establish the reason for the colony… I'm not talking about a research base, but a place where people would settle, do useful economic activity, raise families, and live out most of their lives… I cannot think of a reason why people would want to do that anywhere but Earth.


There is no "thousand colonies". There might be one colony, and that might not ever be self-sufficient.

Interstellar travel is a physics problem, not an engineering one. Even make-believe nuclear propulsion is still aggressively limited by the rocket equation and still won't get you anywhere in a meaningful time frame.

There will never be an interstellar empire. It will never make sense to do trade between two planets that are otherwise capable of producing things, because the energy cost of doing anything in space absolutely dwarfs any possible industrial process. It doesn't matter how low quality your local iron ore is, importing ore from a different planet will never be a better option because transportation costs are effectively infinite.

Human trade is almost entirely based on the fun quirk that sea based transportation is ludicrously efficient, such that you can ship a single pound of product all over the globe and it can still be cheap. The physics of space are essentially the opposite of the physics of sea travel, in that it is dramatically harder and more energetically expensive than almost anything else you can do, and the energy regime it operates in will dwarf any other consideration.

If there were a magical way to turn joules directly into a change in kinetic energy (a machine that could extract every joule of "energy" from matter, in an E=mc^2 sense, and convert it all into a change in an object's kinetic energy), then taking a 100-kilogram human up to half the speed of light and eventually slowing them down again would take about 31 kilograms of matter to "burn". And you have to accelerate all that matter too, which requires roughly another 10 kg of matter to "burn", which itself has to be accelerated, and so on.

And we do not come even remotely close to any mechanism, real or theoretical, that could convert mass to a change in kinetic energy. Even if you had a magic antimatter machine that could come very close to turning a gram of matter into its entire "energy" content, the ways of turning thermal or electrical energy into thrust have their own inefficiencies and difficulties, and do not come close to "each joule of energy equals a joule of kinetic energy change".

And even with our magic physics-cheating spacecraft, that's still a roughly 16-year round trip to Alpha Centauri, with something like a 50-65% payload fraction.
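The 31 kg figure can be sanity-checked with the relativistic kinetic energy formula, KE = (gamma - 1) m c^2, which in mass-equivalent terms is just (gamma - 1) m. A minimal sketch (ignoring, as noted, the need to carry and accelerate the fuel itself):

```python
import math

M_PAYLOAD_KG = 100.0  # the human
BETA = 0.5            # half the speed of light

gamma = 1.0 / math.sqrt(1.0 - BETA**2)
# Mass-equivalent of the kinetic energy: KE / c^2 = (gamma - 1) * m
burn_mass = (gamma - 1.0) * M_PAYLOAD_KG

# Decelerating at the far end costs the same energy again.
print(f"Accelerate to 0.5c:  {burn_mass:.1f} kg of mass-energy")
print(f"Accel + decel total: {2 * burn_mass:.1f} kg")
```

That gives ~15.5 kg each way, ~31 kg total, matching the figure above before you account for pushing the fuel itself.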

The scale of things in space, combined with the nature of that space, makes interstellar anything nonsensical. Even interstellar transfer of just information is fairly mediocre. SciFi will never exist in our world, and at this point should probably just be called "Fantasy with more plastic".


You’ve given numbers for how fast New Horizons launched, and for how fast Voyager 1 got thanks to the 1-in-175-years boost, but is there an easy way to actually compare them?

I.e., either what speed Voyager 1 launched at excluding the gravity assists, or what speed New Horizons would have reached if it had been launched 175 years after Voyager 1 (to take advantage of the same gravity assists)?


Not easily. The tricky part is also in the relative numbers. The Voyager 1 data (and New Horizons data now) is in heliocentric velocity. The bit with NH being the fastest was with Earth centric velocity.

Another part of this is that "the probes are slowing down over time": you can see in the Voyager 1 data that while the velocity after each assist is higher than before, it's not a line at slope 0 but a curve that is slowly going down.

This is further complicated because New Horizons had a launch mass of 478 kg, while Voyager was nearly twice as massive at 815 kg.

They also had different mission profiles. (Could Voyager 2 have taken a redirect from Neptune to Pluto? That trajectory change would have required a periapsis inside the radius of Neptune...)

Voyager was launched with a Titan III-Centaur rocket (which had a misfire) https://en.wikipedia.org/wiki/Titan_IIIE

> Voyager 1's launch almost failed because Titan's second stage shut down too early, leaving 1,200 pounds (540 kg) of propellant unburned. To compensate, the Centaur's on-board computers ordered a burn that was far longer than planned. At cutoff, the Centaur was only 3.4 seconds from propellant exhaustion. If the same failure had occurred during Voyager 2's launch a few weeks earlier, the Centaur would have run out of propellant before the probe reached the correct trajectory. Jupiter was in a more favorable position vis-à-vis Earth during the launch of Voyager 1 than during the launch of Voyager 2.

Note also in there that a few weeks difference between Voyager 1 and Voyager 2 had different delta V profiles (which is why Voyager 1 is faster)

New Horizons was done with an Atlas https://en.wikipedia.org/wiki/Atlas_V

... and I don't have enough KSP background to do the orbital mechanics for this.


Oops, just realised I forgot to reply. Hopefully you're signed up to get emailed reply notifications, to see me say: thanks for this interesting comment!

Starship could be refueled in orbit. It should then be able to reach those kinds of velocities, with enough capacity to even include a small third stage inside with the payload.

Yeah, Voyager 1 was launched on a Titan IIIE. I don't really want to do the delta-v calculations, but if we look at mass to LEO as a rough proxy, the Titan IIIE does 15,400 kg and the Falcon Heavy does around 50,000 kg (with re-use). New Glenn can apparently do 45,000 kg. That doesn't take into account gravity assists, but 3x the capacity before Super Heavy or refueling gives us a helluva lot of leeway.

It's not "interstellar speeds", but I'm pretty sure we could get probes out past Voyager 1 faster if we put the money behind it.


I always wondered whether there's some sort of hard limit in science. Just like in some games you can't fly according to the rules (the game's physics), there may simply be no way to do some things without cheating. What if, say, in 5,000 years we reach the limit? Basically like after playing a couple of months of Minecraft, the only thing left to do is expand.

No, that sounds wrong. I am sure future objects will go further.

We either go extinct or we populate the galaxy (potentially an evolution which will be unrecognisable)

Currently, though, there's nothing planned to leave the solar system faster than Voyager 1. New Horizons will never catch up, short of some weird gravity slingshot in millions of years, which is probably just as likely to fling Musk's Roadster out into interstellar space.


Unless we go extinct. But I concur.

> In fact realistically speaking it is the far more likeliest possibility.

What insight do you have into this issue that would suggest this is true?


1. Get to AGI.

2. Optimise for energy efficiency.

3. Shoot billions of AGIs into space a year.

... Be responsible for the very long-term torture of billions of intelligent lifeforms forced to drift through boring space for thousands of years.


If you make it, you can make it like it.

I think about it literally every day… and with literally every day, the odds of our surpassing ourselves on this one get, again very literally, further away.

The odds are pretty damn flat.

If we launched today, 1% faster would be enough.

If we launched in a hundred years, 1% faster would be enough.

And going faster is downright easy. We can beat Voyager's speed significantly any time we want (plus or minus ten years for planetary alignment).


You’re assuming we, as a species, have the wherewithal, resources, and attention span necessary to both try again and try to surpass.

We haven’t even set another foot on the moon during my lifetime, and we’re not factually any closer to doing so. We have allowed a military industrial complex to keep making money by over-designing and under-delivering over and over and over for a population with constantly dwindling wherewithal, resources, and attention span.

I am neither an optimist nor a pessimist, I am a realist… and the real odds decrease with every passing moment.


We keep sending out probes. Another fast one gets cheaper over time, even. One random billionaire or less could fund one.

If and when a random N-ionaire actually does so, and their probe is both actually moving faster and resilient enough to keep responding long enough to track, we'll talk.

The odds we could surpass Voyager aren’t shrinking, the odds we will are.


You don't think getting cheaper increases the odds?

To me it seems like the odds are close enough to 100 that it's hard to claim a trend. If you asked me mid cold war I might have said there's significant risk we all die first, but not so much now.


I don’t think it’s actually getting cheaper, in real terms, and if it were and there was a financial incentive to go we’d have gone. There’s no financial incentive to go where the resources aren’t, and humanity is a long way from being able to visit the interstellar medium and be able to send anything but information back.

Also don’t know how you’ve missed it, but we’re actually in a more globally precarious position today than we were during the vast majority of the Cold War. But let’s see where we are in 2030.


Not useful, because the signals are too weak to be picked up probe-to-probe.

On Earth, the tiny signal from Voyager at this distance is picked up by a dish the size of a football field; the same goes for sending signals to it.


Very true insofar as it's a description of Voyager communications, but Voyager was 1970s radio engineering. Radio signals spread wide, so you need a big dish to catch them. These days we use lasers, whose beam divergence is several orders of magnitude smaller. And regardless of tech, relaying caps the distance any single signal has to cover.

This is a silly counterexample - why would we launch them that far apart? It’s a terrible idea for multiple reasons. We’d want them close together, with some redundancy as well, in case of failures.

What dish size would be required for a “cylindrical/tubular mesh” of probes, say, 1AU apart (ie Earth-Sun distance)? I’m pretty sure that would be manageable, but open to being wrong. (For reference, Voyager 1 is 169AU from Earth, but I have no idea how dish size vs. signal strength works: https://science.nasa.gov/mission/voyager/where-are-voyager-1...)


A light year is 63,241 AU, so at 1AU spacing that means hundreds of thousands of relays. It would be super expensive and super unreliable. The other problem is that achievable speeds are super slow: Voyager covers a light year every 25,000 years, which means we would wait roughly 100,000 years for a relay chain to Alpha Centauri to be possible.

Much easier just to send a probe with a large antenna or laser, and build a large antenna at Earth.
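For a rough sense of scale, the numbers in this thread (1AU spacing from the parent comment, 63,241 AU per light year, a Voyager-class speed of ~25,000 years per light year) can be checked in a few lines. All inputs are the thread's own assumptions, not mission figures:

```python
AU_PER_LY = 63_241        # astronomical units per light year
DIST_LY = 4.25            # distance to Alpha Centauri, light years
YEARS_PER_LY = 25_000     # Voyager-class cruise speed, years per light year

relays = DIST_LY * AU_PER_LY           # one relay per AU, as proposed upthread
deploy_years = DIST_LY * YEARS_PER_LY  # time for the farthest link to arrive

print(f"relays at 1AU spacing: {relays:,.0f}")        # ~269,000
print(f"years until the chain is complete: {deploy_years:,.0f}")  # ~106,000
```

So a full 1AU-spaced chain is closer to a quarter-million relays than tens of thousands, and the timeline is on the order of 100,000 years either way.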


Starlink has 10k satellites as of this month. 60k doesn't seem unreasonable?

Starlink has a use case.

At Voyager 1 speeds, it'll take 70,000 years for a probe to reach Proxima Centauri. So you'd just be launching a probe a year for the next 70,000 years to create a temporary chain on a course to fly by one particular star. And for what purpose? Okay, in 70,000 years, if everything works out as expected, we have a chain of probes on a course to fly by Proxima Centauri. What problem does that solve for us ("us" here being whatever is kicking around on Earth after a period of time 5x that of recorded human history thus far).


The purpose is (1) deep space observation of our most plausible colonization target outside of the solar system and (2) ramping up a fault tolerant maintenance corridor for generation ships or whatever best alternative paradigm takes the place of generation ships.

What's weird here is that a lot of the criticisms just zoom in on one of the logistical steps and randomly assume it would be executed the worst way possible. I honestly don't know what distance threshold counts as necessary redundancy in this case, but if it's not 1AU (which seems too small imo), then substitute the steelmanned optimal distance and criticize that.

Suppose instead of one-time flybys it's the first half of a long trip to and from, gravity assisted by the major celestial objects of the Alpha Centauri system. I don't want to suggest that it's currently anything like a final draft, but there's ways to steelman these proposals instead of going for the low hanging fruit.

Being a philosophy major didn't convey many practical benefits to me, but one thing I did gain from it was never forgetting the importance of charitable interpretation and steelmanning.


The dish isn't the size of a football field, it's a 70 meter dish (a football field is about 110 meters); it can, however, transmit at 400 kilowatts of power.

Unlike the other comments I actually agree: physics has not changed since the 1970s. Even the most focused laser and detector would need to be positioned perfectly relative to where the next probe would be, and with the nearest star 4 light years away we would be talking about a chain of dozens, any of which may fail in some way. The probes would also likely be small, cell-phone sized, power restricted, and difficult to shield (you couldn't just throw in the latest whiz-bang 2025 electronics, as it all has to be hardened to work for multiple decades). Best is a big transmitter and a good receiver at one end.

You could send a good number of small probes and have them act together as the big antenna dish, basically. As long as you cover the bases, you can have layers of "big antenna dishes" in onion layers.

> the tiny signal from Voyager at this distance is picked up by dish the size of a football field

Lots of small fishes can resemble a large fish.


Laser communication could potentially address some of those issues.

Maybe, but if your probe is heading directly towards another solar system then it will be backlit by its destination.

https://space.stackexchange.com/questions/33338/why-is-the-o... is a neat question that addresses this issue.

And yes, the transmitters will need to be powerful enough to be a distinct signal over the background of the star that is in the line of sight of the receiver / beyond the transmitter.


My understanding is that's a solved problem - NASA's Deep Space Optical Communications has demonstrated laser communication even with the sun in the background. Laser wavelength and modulation are noticeably different from a star's noise if you filter and just look for the wavelength and modulation of the laser, which are notably shorter and faster than most of the noise coming from the star.

What if the probes carry smaller probes left behind at specific intervals that act as repeaters?

These baby probes could unfold a larger spiderweb antenna the size of a tennis court.


We need quantum entanglement based communication. Maybe without full collapse, using weak measurements, like Alice continuously broadcasts a "retrocausal carrier wave" by sequencing planned future post-selection measurements on her entangled qubits, which backward-propagates through time-symmetric quantum evolution to create detectable perturbations in the present states, biasing Bob's qubits away from pure randomness to encode message patterns.

Both parties perform weak measurements on their qubits to extract these subtle signals without collapsing the entanglement, preserving high coherence across the stream. A quantum Maxwell's demon (e.g. many experiments but can be done: https://pubmed.ncbi.nlm.nih.gov/30185956/) then adaptively selects the strongest perturbations from the wave, filters out noise, and feeds them into error correction to reliably decode and amplify the full message.


> which backward-propagates through time-symmetric quantum evolution to create detectable perturbations in the present states,

That's not how quantum physics works. You might be misunderstanding delayed-choice. If you do think it works this way, I encourage you to show a mathematical model: that'll make it easier to point out the flaw in your reasoning.



You cannot exchange information with quantum entanglement. It’s impossible.

This is nonsense word salad.

The paper you link does not demonstrate a Quantum Maxwell's Demon extracting information or energy.

>This proposal is speculative and assumes quantum mechanics is incomplete, incorporating elements from Bohmian mechanics (non-local hidden variables) and CSL (stochastic collapses).

LMAO, you don't get to change the rules to fit your needs. Come on man.

Stop thinking that chatting with LLMs is doing science. You literally just made up fake physics, and claimed that non-existent physics "implies" something.


The problem is each relay needs its own power source so it's not going to be as light and small as you would like. Solar power doesn't work very well outside of the solar system, or even really in the outer solar system.

On the plus side your big probe could push off of the small probe to give itself a further boost, also necessary because otherwise the small probes need thrusters to slow themselves to a stop.


You can't leave anything behind. That would need to be accelerated to 50,000 km/h or have even bigger rockets than launched Voyager in the first place.

Sure, drop one repeater every light-day. 1500 of them. Each one will need fuel to decelerate enough to remain in place.

Football field might even be too small…

Wasn’t Arecibo used for Voyager?


It might have been, from time to time... but it had limited ability to track.

As I type this, DSN Now shows data currently being received from Voyager 1. https://eyes.nasa.gov/apps/dsn-now/dsn.html

https://imgur.com/a/kXbhRsj for a screen shot of the relevant data.

The antenna data is https://www.mdscc.nasa.gov/index.php/en/dss-63-2/


No. Or not any more; DSS-43 at the Canberra Deep Space Communication Complex is the only antenna that can communicate with the Voyagers.

Well, the Voyager power source is still pretty good, but as I understand it the thermocouples that convert heat to electricity have degraded. The Pu-238 half-life is 87.7 years, so they wouldn't even be down to half heat output yet.
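The decay-alone part of that claim is easy to sanity-check: with an 87.7-year half-life and ~48 years since launch, most of the Pu-238 heat is still there, so the bigger loss really is thermocouple conversion efficiency (which this sketch deliberately ignores):

```python
HALF_LIFE_YEARS = 87.7        # Pu-238 half-life
years_elapsed = 2025 - 1977   # time since Voyager's 1977 launch

# Exponential decay: fraction of original heat output remaining
heat_fraction = 0.5 ** (years_elapsed / HALF_LIFE_YEARS)

print(f"heat output remaining from decay alone: {heat_fraction:.0%}")  # ~68%
```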

I think only the Grand Tour alignment recurs every 175 years. From Wikipedia [1]: "that an alignment of Jupiter, Saturn, Uranus, and Neptune that would occur in the late 1970s would enable a single spacecraft to visit all of the outer planets by using gravity assists."

Gravity assists with more than one planet are more frequent. Cassini-Huygens [2] as example had five (Venus, Venus, Earth, Jupiter, Saturn)

I would suspect that when the goal is only to leave the solar system as fast as possible (and not to reach a specific planet), such opportunities are much more frequent.

[1] https://en.wikipedia.org/wiki/Grand_Tour_program [2] https://en.wikipedia.org/wiki/Cassini%E2%80%93Huygens


I wonder if we can go the reverse direction, where instead of launching more probes from Earth to serve as relays, the spacecraft would launch physical media toward Earth packed with whatever data it has collected. Given advancements in data storage density, we could achieve higher bandwidth than what's possible with radios.

The logistics would be difficult since it involves catching those flying media; especially if the spacecraft were ejecting them as a form of propulsion, they might not even be flying toward Earth. I was just thinking of how early spy satellites would drop physical film, and maybe there are some old ideas like those that are still worth trying today.


The spacecraft is moving away from the sun at escape velocity. How is it going to launch anything backwards and have it make it all the way back to earth?

Binary encoding by pulsing the ion stream. Then picking that up with terrestrial telescopes. Very clever!

Smoke signals in space. I love it.

I think they're working on laser data transmission.

With current probes being so "slow" (peak speed of the Voyager probes was on the order of 0.005% of the speed of light), I wonder if even launching 10 probes at once per decade gets you more data back than working towards a faster probe for less total time.

You could use this to create a relay in reverse order, but I also wonder if having a 50-100 year old relay would be any better than just using modern tech directly on the newest, fastest probe and then moving on to the next when there are enough improvements.


This is a link budget problem. A probe has to have a certain transmit power, receive sensitivity, physical size, fuel for orientation, etc. So you have to find the optimums where it makes sense at all, which isn't easy, especially compared to having one big station near Earth that communicates point to point with the deep space whatever.

It might just have to be much too big to be worth it in the next n centuries.

If humans settle Mars it'll probably make sense to build one there for marginal improvement and better coverage with the different orbits of Earth and Mars.


My intuition is that the extra mass for the receivers would be a large negative in terms of travel time (1/sqrt(m) penalty assuming you can give each probe fixed kinetic energy).

Plus keeping a probe as active part of a relay is a major power drain, since it will have to be active for a substantial percentage of the whole multi-decade journey and there's basically no accessible energy in interstellar space.

Then again, it's still far from clear to me that sending any signal from a probe only a few grams in size can be received at Earth with any plausible receiver, lasers or not.
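The 1/sqrt(m) intuition above can be made concrete: for a fixed kinetic-energy budget E, cruise speed is v = sqrt(2E/m), so quadrupling a probe's mass (say, to carry relay hardware) halves its speed and doubles the trip time. The energy and mass figures below are purely illustrative:

```python
import math

def cruise_speed(energy_j: float, mass_kg: float) -> float:
    """Speed attainable from a fixed kinetic energy budget: v = sqrt(2E/m)."""
    return math.sqrt(2 * energy_j / mass_kg)

E = 1e12                          # illustrative energy budget, joules
v_bare = cruise_speed(E, 10.0)    # bare 10 kg probe
v_relay = cruise_speed(E, 40.0)   # same probe plus 30 kg of relay hardware

# 4x the mass -> half the speed -> double the travel time
print(v_bare / v_relay)  # 2.0
```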


Thoughtful intuitions all around. My understanding is that lasers don't necessitate the big reception dish, but instead use a 1m or smaller reflective telescope. The laser setup is lighter, lower power, and has precedent in modern space missions.

Probes I suspect would realistically have to be large enough to send strong signals over long distances, so weightier than a few grams.

I think 99% downtime is an existing paradigm for lots of space stuff, e.g. NASA's DSOC and KRUSTY, so room for optimism there.

Though I think I agree with you that an energy payload, as well as general hardware reliability, are probably the bottlenecks over long distances. I have more thoughts on this that probably deserve a separate post (e.g. periodic zipper-style replacements that cascade through the whole relay line), but to keep this focused on honoring Voyager, I'll just say that Voyager has been huge for opening my imagination to the next steps it inspires.


I also spend far too much time wondering about sending out swarms of probes and if you could somehow rendezvous them and add fuel midjourney and so on!

The problem I see is that lasers are still subject to diffraction, and this is worse the smaller the aperture is relative to wavelength. Due to the small probe mass, which you need to split between observation equipment, support systems and presumably some microscale nuclear power supply, you could maybe, with a few breakthroughs in engineering, manage a wispy affair on the order of a metre at most. And aperture scales with diameter while mass scales with diameter squared.

So the beam from a visible-light laser ends up with a diameter of over 18 million km after 4 light years. With 100W of transmission power, that's roughly 0.1pW per square kilometer of receiver. Which isn't nothing, but it's not huge either.

I really don't see how the Starwisp type microprobes will actually work on a practical level at any time in the foreseeable future, even if the propulsion works. Not only is the communications a problem, but so is power, computational resources, observation equipment, radiation shielding and everything in between. But anything massier than that requires mindboggling amounts of fuel. And the problem is so much worse if you want to stop at the destination rather than scream past at a modest fraction of c and hope to snap a photo on the way past.

It really seems (sadly, in a way) that building gigantic telescopes will be a lot more instructive than any plausible probe for quite some time. A solar gravitational lens telescope would be a far better, and probably almost as challenging, project for learning about exoplanets. Not least, it would sit about 3 times further from Earth than the Voyagers.


Could a probe return data by semaphore? Wave a flag that blocks the light of Alpha Centauri as seen from a telescope off to the side of the sun, say at the distance of Neptune's orbit. It should be possible to hide Alpha Centauri behind a relatively small semaphore until the probe gets fairly close.

Neat idea, but it looks like the math doesn't really work out: https://space.stackexchange.com/questions/66295/is-interstel...

Though it looks like these folks are thinking about blocking light from near the star, which requires megastructures for anything detectable. I haven't done even back-of-the-envelope calculations, but I'd guess the limiting factor is that you'd only be causing an eclipse/transit in an unusably narrow angle directly behind the craft. As you get closer the cone expands, but the signal weakens.


That's how the Mongols communicated, but with guys on horses.

An americium battery could last a lot longer.

Hmm, do you realize that even if you have 1B probes everywhere, you're still bound by speed-of-light communication, right?

It's faster than probe speed in this age, yeah. But still not enough if we're talking distances to other specific planets, stars, etc.

Two possible ways to solve this: humans become immortal, or a way to bypass the speed of light is discovered.


the post office has utility even if the messages have very high latency.

also if this probe network reduces transmission costs to normal terrestrial levels (and doesn't require, say, a 400kW tx dish..) it could drastically increase the utility of the link -- and all of this without discussing how much bandwidth a link network across the stars might possess compared to our current link to Voyager..

(this is all said with the presumption of a reason to have such distance communications channels.. )


Seems like the problem OP is trying to solve for here is not latency, it's signal power and redundancy.

You're exactly right and thank you for carefully reading! I very explicitly said that there was a multi year round trip for information even in the best case (e.g. Alpha Centauri), to get out ahead of the well-actually's.

As you noted, some of the gains could be signal power, redundancy, the ability to maintain a quality signal over arbitrary distance; but most importantly, seeing the universe from the perspective of the lead probe in the relay, some arbitrary distance away.


Thank you, that seems to be the whole ball game for me right there. I understood the sarcastic tone as kind of exasperation, but it means something in the context of an extremely concerning attempt to ram through a questionable algorithm that is not well understood and risks a version of an NSA backdoor, and the only real protection would be integrity of standards adoptions processes like this one. You've really got to stick with the substance over the tone to be able to follow the ball here. Everyone was losing their minds over GDPR introducing a potential back door to encrypted chat apps that security agencies could access. This goes to the exact same category of concern, and as you note it has precedent!

So yeah, NSA potentially sneaking a backdoor into an approved standard is pretty outrageous, and worth objecting to in the strongest terms, and when that risk is present it should be subjected to the highest conceivable standard of scrutiny.

In fact, I found this to be the strongest point in the article - there's any number of alternatives that might (1) prove easier to implement, (2) prove more resilient to future attacks (3) turn out to be the most efficient.

Just because you want to do something in the future doesn't mean it needs to be ML-KEM specifically, and the idea of throwing out ECC is almost completely inexplicable unless you're the NSA and you can't break it and you're trying to propose a new standard that doesn't include it.

How is that not a hair on fire level concern?


Chiming in as a reply to your comment since I had a similar feeling. There's no... institution!? No university or other institution listed. They list author names, which is something. But no institution, no paper, no heritage of research concepts. No citations outside of a few NIH ones not especially specific to their particular experiment. No real meaningful discussion of mechanisms. The domain itself doesn't have anything other than this page. Granted, whatever, there's no rules in this world, do what you want. But so far there's precious little in the traditional signals we typically rely on to distinguish this from misinformation.

This reminds me a bit of the escherian staircase video from 10+ years ago that went viral. A bunch of college students walking down the stairs, acting amazed when they found themselves back at the top. It was great acting and video editing, but it was fake and all part of, if I recall correctly, an art project.

I don't want to dismiss it outright either, seems cool as hell. But it's remarkable to me that all it takes is a blogpost to get this amount of uncritical acceptance of a demonstration.


Microorganisms, the greenhouse effect, and celestial bodies Uranus and Pluto were discovered by people without prior scientific credentials. If somebody stumbles upon an interesting observation which cannot be explained by an obvious mistake, it's worth taking and reproducing seriously.

Every raving crank tells this story to themselves about how they're the next Galileo, and that they are the exception that warrants suspending our skills for critical interrogation.

I think this is cool, plausible and warrants investigation, but not suspension of disbelief. There needs to be a better way to go about this than responding "what about Galileo!?" to any principled application of critical thinking.


Your offhand dismissal of citizen science with an anecdote about an endless stair video edit is not a well principled application of critical thinking.

One has to set prejudice to the side and examine the claim being made to apply critical thinking.


This is a spectacular misread of my comment on practically every level. I noted the absence of numerous contextual things we typically, appropriately(!), rely on as indicators of credibility, gave an example of unsourced video illustrating what can go wrong, and emphasized that I wasn't dismissing it outright! If this is a words-mean-things conversation then those are meaningful points you haven't even pretended to address.

I agree there are some red flags here to me. One is the priority claim "As far as we know, no one seems to have done this kind of stimulation before - even in animals." The other is the definitive conclusion based on weak experimental design and documentation, "Can ultrasound make you smell things that aren’t there? Turns out, yes!"

These are big scientific claims, but the work is clearly too premature to make those conclusions, and it lacks the connection to prior work and peer review needed for making priority claims. It's really great hacker-tinkering work though, and it could turn into solid science if they take more care with it.

If this effect is real and truly novel, my cynical expectation is that someone already established in focused ultrasound will read this, apply a more rigorous approach, and get the recognition that they are hoping for through more established channels.


I wonder where they got their equipment and research space. A charitable explanation is that they purchased it out of their own pockets, but otherwise they really should acknowledge their support if it's from a university, federal grant, foundation award, etc. In my opinion, as someone with domain experience, they don't show any novel solutions here; it's mostly that they had the time and resources to experiment, so it's especially important to acknowledge who enabled it.

One retraction: this does actually have quite a rich discussion of physical mechanisms. And the point at the end about open-ended signal transmission, owing to limited olfactory post-processing, is fascinating.

Ad verecundiam. One does not need to be in an institution to come up with interesting observations/results.

To be frank, the thing that makes me question it the most is how repetitive the writing is. Seems LLM-like.

However, that's not a valid reason to discard an interesting result.


I think the vast majority of attempts to shoehorn ai into the browser are deeply unimaginative and very much garnering a "who ordered that?" reaction. But I suspect we're going to converge on some specific use cases that everyone's going to want and it might just be important to be in the game now as we collectively figure out what they are.

Recently a Y Combinator-funded project got highly upvoted on HN: a Chrome-based extension that used LLM capabilities to effectively do Greasemonkey-style scripting live in response to human requests. Now that is interesting; it's a specific application that's actually meaningful, not just another AI chat sidebar.

I think it's a matter of workshopping, but I bet we're going to discover things users actually want that are not yet obvious to us. The example I keep thinking of is non-stupid agent tasking. I wouldn't mind an agent that browsed Amazon for Kindle Unlimited hard sci-fi books with critical acclaim. I would be willing to bet there are going to be numerous "why didn't I think of that" uses cooked up in the next few years.


I genuinely do hate this too, I have to reset it every time. But it's worth 600 million to them so I'm kind of begrudgingly bearing it.

