True. I brought up the Earth just as a thought-experiment suggestion.
My point was that computers will most likely be able to “think”, eventually. And if they do it at a different scale than us (which will not be 1 cycle per second, that’s ridiculous), as suggested in the blog post, they will probably have trouble identifying us as intelligent.
If we design its intelligence bottom up, which is the most likely way, we won’t necessarily be able to build anything into such a singularity. We will only be able to hope that a superior intelligence won’t be harmful to us and that, eventually, it will be benign.
And, as far as I can read it, your definition of intelligence does not allow for grading it on a scale. But sure, you can extrapolate “past and future”, except that’s not as concise.
Yes, but my original point was that - unlike natural intelligence, which is a product of generations of trial and error and based on a multi-layered structure of chemical and electrical balances - artificial intelligence will be a product of architectural and conscious design. It won’t have to recognize that we are intelligent - its very nature, the purpose it is built around, is to coexist with and serve our intelligence.
Suppose DNA was built by an intelligent being, or species. We do not currently recognize them. We would have been built bottom up, and even eons into it we still don’t recognize them. And by “we” I mean “all life on Earth”, within our limited human species’ knowledge, of course.
Currently, almost everything we’ve built that is considered even slightly intelligent was done top down, the way you describe. It’s like answering the question of what came first, the chicken or the egg. The egg is a bottom-up approach. There is a company trying to do it bottom up: Vicarious.
If a consciousness arises out of such an approach, it won’t necessarily acknowledge where it came from. It most likely won’t know it for a very long time, because it will be a “natural intelligence”. It will be a product of generations of trial and error.
There’s also a chance we could actually become so smart and knowledgeable about how we are doing it that even with a bottom-up approach we manage somehow to make it “consciously intelligent” on purpose. Then, and only then, would I agree with your statement. I just happen to believe this is the most unlikely way for the singularity to happen, though.
In any case, once it reaches a point of extremely superior processing capacity, by definition we can’t predict whether knowing its origin, or knowing how to communicate with us, will matter at all.
The funny part is, it still does not have anything to do with AIs created by mankind (if we are talking about AI-based user interfaces like the one mentioned in the post). The only purpose an artificial intelligence can fit better than a smartly written, procedurally coded application/service is to interface with human beings. (Which does not necessarily mean real-time interaction - most AI development languages are created with the purpose of text analysis.) In fact, the “intelligence” of AI systems can only be found on/near user interface levels.
Can you imagine a highly advanced entity with the main/only purpose of communicating with humans which does not know of/believe in humans?
Sorry about the typos - I wrote that reply from a phone while going to work by bus, and before my first coffee.
But if something else is unclear, I’ll happily clarify my point of view. (Since I’m not a native English speaker, I might not be as clear as I intend to be.)
It’s cool, I just don’t get what you mean. I think we’re getting way too philosophical here.
I’ll just say I disagree that most AI development is meant to analyze text. I think most AI development wants to bring us a smart-ass computer and, to that end, we start with the simplest step, which is indeed analyzing text. But if you do it with just that in mind, you’ll lose the big picture.
I wasn’t heckling, but you know he [the writer] used the name Iain Banks to publish mainstream fiction and the name Iain M. Banks to publish science fiction.
I got to about 2.6B km before giving up and using right-click -> View Source on the “If the moon were only 1 pixel” page. Much easier to navigate the solar system that way.
As far as comprehensive AI learning goes (even when you limit it to vision), I’d think that human-like learning is impossible for a computer (or 60k “computers”) in isolation - it would absolutely require a robotic interface of some sort, since interaction with an environment is required for real learning. One might get closer with a simulated environment, but videos are not an environment at all - they are passive and read-only. Intelligence is very much reactive and interactive.
I’ve seen this argument for way too long: that computers are stupid, while we, on the other hand, do things without even giving them a thought. Over time, I realised that they’re only as stupid as us. I’m not saying we’re stupid, although some of us are! Anyway, what shaped us humans was billions of years of evolution, and it’s done a pretty good job of it. So good a job that we’ve started an evolution of our own kind… with the computers.

Our evolution is not very different, though. We’ve had great hardware improvements over time, which are still going on, but only recently have we had commendable software developments. This is comparable to the humongous and powerful (strong) animals that evolved way before us. The true advancements only started happening when a rather meek animal with a powerful brain came into existence: humans. And I’m not just talking about the sapiens, I’m talking about all the _homo_s that evolved over time.

The evolution took its time. Computer development will take its time too. But we’ve seen that computer development runs on a very different time scale from evolution’s. So, we can expect a rather fast, but directed, evolution for at least a few millennia…