
What does the feasibility of AI have to do with science fiction?

Unless you believe in some kind of magic (soul/divine spark/etc.), it seems inevitable that human cognition can be replicated by a machine, at the extreme end simply by simulating the whole thing.

I would argue that "language" was a defining characteristic of human intelligence (as opposed to "lesser animals") long before we even conceived of AI, and hitting language processing/understanding benchmarks that far exceed animal capabilities is a very strong indicator by itself.



To clarify my stance, I do think that AI research is a good thing, and that studying it can help us understand our own consciousness. I just think anyone concerned about AGI at this point is drinking too much Kool-Aid.

I think science fiction makes us feel like AGI is closer and more relevant than it really is. Are we really so close that there's "no time" to regulate for it? Are we even close enough to care at all? Is there any evidence at this time that AGI would be an existential threat? Is all this talk coming from decades of "Creator vs. Daemon" stories combined with LLMs' ability to sound human? Does someone gain something from pointing our attention at a far-off, sci-fi-imagined future instead of current-day issues with AI?

To your simulation point: we talk about our bodies, brains, and consciousness like they are machines, computers, and software. They are not. They are organic, and their function is highly emergent. Think about all the endless input our bodies receive; we probably don't even fully understand all the sensitivities our "consciousness" has. We don't "execute" a program based on input; in fact, the input is likely part of the system. It's why our gut biome can affect our mental health, and why our cells communicate with each other locally and likely guide our behaviour in ways we don't understand yet.

In other words, we're not any closer to simulating human consciousness than we are to simulating the universe itself.

So I do think it's hubris to think that a rock with some switches on it running some mathematical probability model can come even close to generating consciousness.

> exceed animal capabilities

Hubris like this. We don't even have a way to measure this. We've just made a math equation that can spit out language; you're comparing this to things that can actually _experience_ without someone clicking enter on a keyboard.


> We've just made a math equation that can spit out language; you're comparing this to things that can actually _experience_ without someone clicking enter on a keyboard.

What would it mean to "experience" language properly, and what precludes LLMs from doing that?

In my view, hubris is exactly the opposite view, that our minds are somehow "more" or "better" than a bunch of matrix multiplications (a view with a poor historical track record; see e.g. geocentrism, or the whole notion of a lifeforce/soul/divinity).

I'd argue that a large amount of complexity in our brains is extremely likely to be incidental; I don't see why we would ever need to fully simulate a gut biome to achieve human-level cognitive performance (being overcomplicated certainly makes it harder to replicate exactly or to understand fully, but I don't think either of those is necessary).



