The Infinite Space Between Words

If you can convert from km to miles when reading international web sites, you can just as easily convert from miles to km when reading American sites.

The more I study AI and machine learning, the more amazed I am by the human brain. Google ran a computer-vision learning algorithm for 3 days on something like 60k cores. The algorithm watched more YouTube video than a person could in a lifetime. The result: a state-of-the-art vision algorithm that comes nowhere near human levels of performance. Even though the computer used was incredibly fast, it only simulated around a million neurons. That's 0.001% of the human brain's one hundred billion neurons, and that only counts simple neurons. Just recently, neuroscientists discovered that dendrites (the inputs to a neuron) perform computations of their own; before, they thought only the soma (the neuron cell body) did the computation. Considering that there are on average 10,000 dendrites per neuron, that 0.001% could be way off. It could be more like 0.0000001%. Suffice it to say, computers have a long way to go.
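For what it's worth, the percentages work out like this, using the same figures as above (~1 million simulated neurons, one hundred billion real ones, 10,000 dendrites per neuron); just a back-of-the-envelope sketch:

```python
simulated = 1e6              # neurons simulated in the Google experiment (per the comment above)
neurons = 100e9              # neurons in a human brain
dendrites_per_neuron = 10_000

print(simulated / neurons * 100)                            # 0.001 %
# If every dendrite is also a computing element, the denominator balloons:
print(simulated / (neurons * dendrites_per_neuron) * 100)   # 0.0000001 %
```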

I have already done considerable work in this area regarding the difference in speed of thought between humans, AIs, and any other kind of life form, and how that difference affects the ability of one to measure the L8-IQ of another. The L8-IQ Scale (IntractableStudiesInstitute.org/communications/L8_IQ_Scale.pdf) has meta rules (like the modeling rules), prerequisites for the scale, and then the actual L8-IQ scale. In order to measure the L8-IQ of an AI, or any life form, the 2nd meta rule is that the measurer and the measuree have to be at the same or nearly the same speed of thought. This does not mean that a slower-thinking life form has less L8-IQ; it could in fact have more, and higher-quality, IQ. But if the speeds of thought are too different, the slower one may not be able to determine the L8-IQ of the faster life form. We should not assume that a faster speed of thought implies more intelligence.

This means that the human "think-brain" may have some difficulty evaluating the L8-IQ of a much faster AI. However, the human "feel-brain", being much faster than the human "think-brain", may have a chance. [I model human brains as 2 distinct processing types, think and feel, with the feel-brain being much faster.] When I complete Project Andros at the Institute and copy my mind into the android robot, I will be able to provide an answer from experience; I'm halfway there now.

–>Pat Rael, Director, Intractable Studies Institute


He was probably referring to traditional spinning rust hard drives, so let’s adjust that extreme endpoint for today:
Latest fastest spinning HDD performance (49.7) versus latest fastest PCI Express SSD (506.8). That’s an improvement of 10x.

Jim was referring to storage latency, but you compare sequential read speeds.

I took numbers from storagereview.com for the 4K 100% read, 16 threads, 16 queue depth test (average latency):

1800ms — 7200rpm hdd
4ms — average ssd
0.34ms — fastest pci-e ssd

The overall improvement is from 450× to about 5,000×, which gives 10 million and 1 million miles respectively: about 1/6 of the distance to Mars and 4× the distance to the Moon.
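For reference, the improvement factors fall straight out of those latencies (the mile figures come from the post's distance analogy and aren't recomputed here):

```python
hdd_ms, avg_ssd_ms, pcie_ssd_ms = 1800, 4, 0.34

print(hdd_ms / avg_ssd_ms)    # 450.0  -> ~450x over the average SSD
print(hdd_ms / pcie_ssd_ms)   # ~5294  -> ~5,000x over the fastest PCIe SSD
```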


Can we please finally get rid of the imperial system in the US? It wasn't such a big deal in Ireland, so why would it be in the US…

Funny that; I often find myself waiting on the GPU/graphics card to complete.


To all of you who are referring to the superior computational performance of the human brain based on the number/layout of the neurons:

Please consider that the human brain is the product of a trial-and-error process, while computers are the products of literally generations of engineering. While certain parts of the human brain are capable of the complex ballistic computations needed to calculate the flight of a non-spherical ball through the windy air, those parts can't be used to perform similar computations to determine the course of a guided missile. While the human brain is capable of storing a lifetime of audiovisual information in an associative manner, it fails to recall where you put those damn keys seconds ago.

Please also consider that the human brain has no fast-access interface. Our fastest input port is our vision, and it only has a throughput of about 10 MB/s, not to mention our poorly designed output ports. I can definitely type faster than I can speak, but I can barely type 120-150 words (est. 1-1.5 kB) a minute (about 25 bytes/s).
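To make that output-rate estimate explicit (the ~10 bytes per typed word, spaces included, is my assumption):

```python
words_per_minute = 150
bytes_per_word = 10                                    # assumed average, including the space
bytes_per_minute = words_per_minute * bytes_per_word   # ~1.5 kB per minute
print(bytes_per_minute / 60)                           # ~25 bytes per second
```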


Probably because the US has 46x more people and – even taking Alaska out of the mix – 96x more land.

Something as simple as replacing all those speed limit and “distance to city” signs would be incredibly expensive, for no appreciable gain.

Yes. This is one of those areas that many programmers are oblivious to. The other is the extreme that so many go to in order to avoid these fetch operations, i.e., caching everything and then looping through it for every operation. The problem is that looping through 100,000 records in memory can be more expensive than reading one from disk.

This could lead to a lot of other discussions that would bore everyone if I got into them. Suffice it to say that reading data is not the only operation you need to be aware of. Stay away from extremes; there is no one-shot solution (i.e., cache everything, put everything in the DB, or keep everything on disk). You need to use the best solution for your problem, and that takes thinking.
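As a rough illustration of that point, here is a hypothetical sketch (the record count and the unindexed cache scan are my example, not the poster's): scanning a big in-memory cache on every request can cost more than a single keyed fetch.

```python
import time

# Hypothetical cache: 100,000 records held in a plain list.
records = [{"id": i, "name": f"user{i}"} for i in range(100_000)]

def find_by_scan(user_id):
    # "Cache everything and loop through it" -- O(n) work per lookup.
    for r in records:
        if r["id"] == user_id:
            return r

index = {r["id"]: r for r in records}

def find_by_index(user_id):
    # Keyed lookup -- O(1) per lookup, closer to what an indexed DB read costs.
    return index.get(user_id)

for name, fn in (("scan", find_by_scan), ("index", find_by_index)):
    start = time.perf_counter()
    for _ in range(100):
        fn(99_999)                   # worst case for the scan: the last record
    print(name, time.perf_counter() - start, "seconds")
```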

The conversion factor from miles to kilometers is less than 2, which is often less than the error term on many astronomical distances. When you see X miles, just think "between X and 2X km", since we're all familiar with multiplying by 2. :slight_smile: Actually, the true factor is about 1.6 and 1.5 is a decent approximation, so to get a closer estimate go for halfway between X and 2X km.
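A quick worked example, using the Moon's distance of roughly 238,900 miles (my figure, not the poster's):

```python
moon_miles = 238_900
print(1.5 * moon_miles)    # ~358,000 km  (mental-math estimate)
print(1.609 * moon_miles)  # ~384,400 km  (actual conversion)
```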


I think both statements are true. The computer is much faster and much more accurate, but cannot "make the most trivial common sense deductions" on its own.

This discussion reminds me of a quote from John Cook’s blog…

http://www.johndcook.com/blog/2013/01/24/teaching-an-imbecile-to-play-bridge/

This is going to come off as super-science-fiction’y, but I really can’t help but think about the Matrix when I read this post.

To heck with “we’re too slow for computers”, I wanna get to the future where we can plug into the computer and live thousands of lifetimes in a single minute. Just imagine what that could mean for dying patients to be able to live ‘forever’, for scientific research, and for whole fields to just be able to spend seconds of real world life to gain decades of experience.

Heck, we've already got Spock's tablet, and that seemed rather futuristic back then. I love imagining a future where we have all sorts of things we once thought could solve the world's problems… just so we can take them for granted.

Think of it as a stimulus package. It can't be more expensive than the huge sums we throw at the banking sector. This may actually end up in the hands of normal people.


I agree that my numbers are a broad benchmark for "how much better is disk performance on SSD vs HDD overall" rather than being specific to latency. The test you cited is kind of an extreme load test, though, and won't reflect the average latency of a typical request. For comparison, here is the average rotational latency by spindle speed (with a quick sanity check after the list):

  • 4,200 rpm – 7.14 ms
  • 5,400 rpm – 5.56 ms
  • 7,200 rpm – 4.17 ms
  • 10,000 rpm – 3 ms
  • 15,000 rpm – 2 ms
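Those figures are exactly the time for half a revolution; a minimal check, assuming that's what they represent:

```python
for rpm in (4200, 5400, 7200, 10_000, 15_000):
    # average rotational latency = time for half a revolution, in ms
    print(rpm, round(0.5 * 60_000 / rpm, 2), "ms")
```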

One thing I love about SSDs is that they are attacking the worst-case performance scenario, when information has to come off the slowest device in the computer, the hard drive. So SSDs are reducing the variability of requests for data massively:

Without SSD:
0.9 ns → 10 ms (variability of 11,111,111× )

With SSD:
0.9 ns → 150 μs (variability of 166,667× )

So that’s an improvement in overall performance variability of 66×!
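The arithmetic behind those ratios, for anyone who wants to check it (0.9 ns presumably being the L1 cache access time from the post):

```python
l1_cache = 0.9e-9    # seconds
hdd = 10e-3          # seconds (mechanical disk access)
ssd = 150e-6         # seconds (SSD access)

print(hdd / l1_cache)                        # ~11,111,111x spread without an SSD
print(ssd / l1_cache)                        # ~166,667x spread with an SSD
print((hdd / l1_cache) / (ssd / l1_cache))   # ~66.7x less variability
```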

I think you missed a crucial point in your story: we humans live in continuous time, while computers 'live' in discrete time.
For us, the distance between words is possibly infinite; we could split that second into a million parts and the cells in our body would still be doing something in that millionth of a second, something 'decisional' to push a thought or an action forward.
For computers everything is finite, from one instruction to the next. Between instructions there is only emptiness, and some electrical current going from one NAND gate to another.

And brains just have electrical current going from one neuron to another, so there is not really a fundamental change there. (Brains are essentially what we call an analogue computer; we can build those using op-amps, but building one on the scale of even a common standard CPU would be a daunting task.)

What I find difficult to follow is the idea that a computer will, just because it can do basic math faster, somehow "think" faster as an AI than a human does or can. Early AIs will almost certainly think much slower than humans; as that improves, they will eventually think as fast as, if not faster than, humans. But so MUCH faster? There is no reason why they or we would ever build a computer that can think an order of magnitude faster than a human, simply because the speed of thinking isn't of major benefit to an entity beyond "fast enough to hold an intelligent conversation with other entities". Why double the speed at which a computer can think when you could use that additional computing power to give it better access to the pool of knowledge, faster (but "dumb") computation, space to run hardcoded routines for the day-to-day stuff, and so forth? To be fair, that last part is what the brain does too: when you walk, you don't consciously decide how to move each leg and how to balance on your foot while your center of gravity shifts; that's taken care of by learned routines, "reflexes" in human terms, which operate without you needing to waste the fairly limited supply of thought on them.

I can certainly see how, in order to do research in pure disciplines, you could run an AI at a high multiple of human thought speeds, and have it communicate in a high-speed network with other minds working on similar problems; however, for day-to-day usage, there would be little point in paying the computing and power costs to process a million times faster than a human, if the world you live in is plodding along at human speeds.

The time for reboot (5 minutes?) seems a bit off by today's standards. A dual-core i2 booting Win 8.1U1 from a WD Black, fully kitted out with service-laden apps such as Visual Studio, Office 365, Creative Cloud and multiple SQL Server instances, takes about 30 s to the login screen and about 1 minute to the desktop (and the most significant delay in this seems to be establishing the wireless Internet connection). I don't think I have ever seen a PC take 5 minutes outside of trying to run a modern OS on the bottom tier of obsolete hardware.

These are great! Thanks for sharing them.

I think you misread the first number from Norvig's article, though. He says:

execute typical instruction: 1/1,000,000,000 sec = 1 nanosec

which isn't the same as saying that 1 CPU cycle = 1 ns.

Before I finally got my SSD, a full reboot took ~20 minutes. And yeah, I may not have workstation hardware, but it's a nice desktop.

The old spinning hunk of metal hard drive is definitely an evil that most who have moved to SSD don’t even remember the pain of enduring.

The last time we tried to switch to metric, people actually shot at the metric signs.

The above Internet times are kind of optimistic

I think the times given are nominal one-way times; you have to double them to get round-trip times.

California to US east coast is currently about 80 ms round-trip (so nominally 40 ms each way).

California to UK is currently about 160 ms round-trip (so nominally 80 ms each way). (See data below.)

In fact, the times given must be nominal one-way times, because claiming a 40 ms round-trip time between SF and NYC would be faster than the speed of light in fibre.

An interesting observation emerging from this fact is that parts of the Internet have already got within less than a factor of two of the theoretical optimum predicted by Einstein. Not many human endeavours can make that claim. We may continue to lower Internet latency, but we’ll never see a 10x improvement on the SF-NYC round-trip time.
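A rough check of that claim, using my own approximations of ~4,130 km for the SF-NYC great-circle distance and ~200,000 km/s for light in fibre:

```python
distance_km = 4130        # approximate SF-NYC great-circle distance
c_fibre_km_s = 200_000    # light in fibre moves at roughly 2/3 the speed of light in vacuum

one_way_ms = distance_km / c_fibre_km_s * 1000
print(one_way_ms)         # ~20.7 ms minimum one way
print(2 * one_way_ms)     # ~41 ms minimum round trip: a 40 ms RTT would beat light in
                          # fibre, and the observed ~80 ms is within 2x of this floor
```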

Here’s something I wrote on this topic about 18 years ago: http://stuartcheshire.org/rants/Latency.html

Stuart Cheshire

% ping -c 10 lcs.mit.edu
PING lcs.mit.edu (128.30.2.121): 56 data bytes
64 bytes from 128.30.2.121: icmp_seq=0 ttl=45 time=74.269 ms
64 bytes from 128.30.2.121: icmp_seq=1 ttl=45 time=74.198 ms
64 bytes from 128.30.2.121: icmp_seq=2 ttl=45 time=75.381 ms
64 bytes from 128.30.2.121: icmp_seq=3 ttl=45 time=74.460 ms
64 bytes from 128.30.2.121: icmp_seq=4 ttl=45 time=74.152 ms
64 bytes from 128.30.2.121: icmp_seq=5 ttl=45 time=74.157 ms
64 bytes from 128.30.2.121: icmp_seq=6 ttl=45 time=74.213 ms
64 bytes from 128.30.2.121: icmp_seq=7 ttl=45 time=74.100 ms
64 bytes from 128.30.2.121: icmp_seq=8 ttl=45 time=74.103 ms
64 bytes from 128.30.2.121: icmp_seq=9 ttl=45 time=74.092 ms

--- lcs.mit.edu ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 74.092/74.312/75.381/0.371 ms

% ping -c 10 cam.ac.uk
PING cam.ac.uk (131.111.150.25): 56 data bytes
64 bytes from 131.111.150.25: icmp_seq=0 ttl=45 time=153.824 ms
64 bytes from 131.111.150.25: icmp_seq=1 ttl=45 time=153.980 ms
64 bytes from 131.111.150.25: icmp_seq=2 ttl=45 time=160.978 ms
64 bytes from 131.111.150.25: icmp_seq=3 ttl=45 time=154.426 ms
64 bytes from 131.111.150.25: icmp_seq=4 ttl=45 time=154.310 ms
64 bytes from 131.111.150.25: icmp_seq=5 ttl=45 time=153.809 ms
64 bytes from 131.111.150.25: icmp_seq=6 ttl=45 time=153.864 ms
64 bytes from 131.111.150.25: icmp_seq=7 ttl=45 time=154.885 ms
64 bytes from 131.111.150.25: icmp_seq=8 ttl=45 time=155.006 ms
64 bytes from 131.111.150.25: icmp_seq=9 ttl=45 time=154.196 ms

--- cam.ac.uk ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 153.809/154.928/160.978/2.056 ms