
Talk:Bremermann's limit

From Wikipedia, the free encyclopedia

References


~1.35×10⁴⁷ bit/s/g

No closed computer system, however constructed, can have an internal signal flow that exceeds mc²/h bits per second (here m is the total mass of the system, c the velocity of light in vacuum, and h is Planck's constant)...the numerical value of c²/h is 1.35×10⁴⁷ (bits per second per gram)

Italics are per original source, namely p. 172 of "Complexity and Transcomputability", Hans J. Bremermann, in The Encyclopedia of Ignorance, edited by Ronald Duncan and Miranda Weston-Smith, 1977, Pergamon Press (reprinted by Wallaby/Pocket Books/Simon & Schuster/Gulf & Western 1978), pp. 167–174. The author cites derivations in the following articles:

  • "A basic limitation of the speed of digital computers", W. W. Bledsoe, IRE Transactions on Electronic Computers, EC-10, 530, 1961
  • "Part 1: Limitations on Data Processing Arising from Quantum Theory", in "Optimization Through Evolution and Recombination", H. Bremermann, in Self Organizing Systems, edited by M. C. Yovits, G. T. Jacobi, and G. D. Goldstein, Spartan Books, 1962
  • an "improved argument" in "Quantum Noise and Information", H. Bremermann, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, 1967
  • "planned for publication" article: "Evolution and Optimization", in Biomathematics Lecture Note Series, 1978, Springer-Verlag

All those references seem noteworthy to me, though I'd like to verify the 1978 article.

Typing the string c^2/h into Google yields 1.35639293 × 10⁵⁰ kg⁻¹ s⁻¹. If no dissent I will modify the article when I get around to it. --munge 07:58, 25 December 2006 (UTC)
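The Google value and Bremermann's 1.35×10⁴⁷ per-gram figure agree once the gram/kilogram difference is accounted for, and anyone can double-check from the SI constants directly (a quick sketch using the exact 2019 SI-defined values of c and h):

```python
# Verify c^2/h using the exact SI-defined constants
c = 299_792_458.0        # speed of light in vacuum, m/s (exact)
h = 6.626_070_15e-34     # Planck's constant, J*s (exact)

per_kg = c**2 / h        # bits per second per kilogram
per_g = per_kg / 1000    # bits per second per gram

print(f"{per_kg:.6e} /kg/s")   # ≈ 1.356393e+50, matching Google
print(f"{per_g:.6e} /g/s")     # ≈ 1.356393e+47, matching Bremermann's 1.35e47
```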

Encryption speed


Should the phrase "double the speed of encryption" be "double the cost of encryption" or "halve the speed of encryption"? 174.22.190.248 (talk) 03:41, 2 November 2009 (UTC)

Neither; it depends on the algorithm. Now fixed. --David-Sarah Hopwood ⚥ (talk) 23:50, 14 October 2010 (UTC)

...without increasing the time for encryption by more than a constant factor


Maybe I'm stupid, but isn't 10⁷² years a constant? I think this needs clarification, or maybe a link or citation. --Nyelvmark (talk) 20:54, 9 August 2011 (UTC)
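For what it's worth, the 10⁷² figure is not arbitrary; it falls out of the arithmetic for brute-forcing a 512-bit keyspace at Bremermann's limit with an Earth-mass computer (a rough sketch; the Earth-mass value, the 512-bit key size, and the one-operation-per-key-test assumption are mine, chosen to reproduce the article's scenario):

```python
c = 299_792_458.0          # speed of light, m/s
h = 6.626_070_15e-34       # Planck's constant, J*s
earth_mass = 5.972e24      # kg (assumed value)

ops_per_sec = earth_mass * c**2 / h      # Bremermann's limit for an Earth-mass computer
keyspace = 2**512                        # brute-force search space for a 512-bit key
seconds = keyspace / ops_per_sec
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")              # on the order of 10^72
```

The "constant factor" in the article refers to the multiplicative overhead that a longer key adds to legitimate encryption, not to the (astronomically large, but fixed) time the attacker needs.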

Assuming...


Doesn't this explanation of cryptography assume P ≠ NP? Maybe someone who knows more should add a note to this effect. — Preceding unsigned comment added by 128.42.216.63 (talk) 19:56, 27 August 2011 (UTC)

Outdated?


This newer (2000) article reaches a slightly higher limit of 5.4258×10⁵⁰ operations per second for a one-kilogram, one-litre "laptop". --Roentgenium111 (talk) 15:06, 29 August 2011 (UTC)

I added some references from the laptop paper. Anders Sandberg (talk) 17:53, 28 August 2012 (UTC)
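For comparison, the laptop paper's headline number follows from the Margolus–Levitin bound 2E/(πħ) with E = mc² for m = 1 kg (a quick check using standard CODATA constant values; any small discrepancy in the last digits comes from rounding in the constants):

```python
import math

c = 299_792_458.0            # speed of light, m/s
hbar = 1.054_571_817e-34     # reduced Planck constant, J*s
m = 1.0                      # kg, the one-kilogram "ultimate laptop"

ops_per_sec = 2 * m * c**2 / (math.pi * hbar)   # Margolus-Levitin with E = mc^2
print(f"{ops_per_sec:.4e}")  # ≈ 5.426e50, in line with the paper's 5.4258e50
```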

Linear?


Bremermann's law relates to linear (non-parallel) computation. If the "matter" doing the computation could perform computations in parallel, then a higher value can be reached. For example, each atom in 1 kg of hydrogen computes the force on it from each and every other atom. Given there are 10^30-odd atoms, this means that there must be 10^60 or so computations each Planck time, giving a total of 10^100-odd calculations per second (so long as your question can be posed in such a way that it can be answered by the gravitational interaction of particles). — Preceding unsigned comment added by 195.12.16.211 (talk) 14:32, 6 September 2011 (UTC)

Never be cracked?


The article currently uses the phrase "...create an algorithm that could never be cracked by a brute-force search." I believe this is incorrect. It would be very, very, very unlikely, but it is possible that a brute-force search could find the key on the first guess. --Guy Macon (talk) 22:22, 9 April 2012 (UTC)
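To put a number on "very, very, very unlikely" (a trivial sketch; the 256-bit key size is an assumed example):

```python
from fractions import Fraction

def first_guess_probability(key_bits):
    """Chance that a uniformly random key is hit on the very first guess."""
    return Fraction(1, 2**key_bits)

p = first_guess_probability(256)
assert p > 0                # nonzero, so "never" is technically too strong
print(float(p))             # ≈ 8.6e-78
```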

Merge with the Margolus–Levitin theorem page?


Both deal with nearly the same topic. Using Lloyd00 for further references the pages might be merged into a joint page on ultimate speed limits for computation and logic gates. Anders Sandberg (talk) 17:53, 28 August 2012 (UTC)

That's a limit in operations per second per joule, so Bremermann's limit is essentially that combined with mass–energy equivalence (E = mc², joules per kilogram), while Landauer's principle separately bounds the number of bit erasures per joule. The joint page is probably Limits to computation.

Comparisons with actual hardware


Since this is included in Limits to computation, please include comparisons with the processing speed of actual modern hardware, like the bits per second of the world's fastest supercomputers divided by their mass. — Preceding unsigned comment added by 71.167.67.219 (talk) 20:45, 5 May 2013 (UTC)


I second that, but I would like to see it for a simple processor die, not counting the mass of the socket and heat spreader, just the silicon die. Otherwise the supercomputer case will get very, very bad numbers, since counting the industrial cooling machinery, all the wiring, and the heavy casing is not a fair comparison. - anon. 24 Nov 2015. — Preceding unsigned comment added by 113.43.87.45 (talk) 07:19, 24 November 2015 (UTC)