Statements posted here are those of our readers and do not represent the BaseballThinkFactory. Names are provided by the poster and are not verified. We ask that posters follow our submission policy. Please report any inappropriate comments.
Wow; that's a dishonest description of the link. He's upset about the law, sure. But as per the link, he actually called for Cantor to step down because Cantor was threatening Republicans who wouldn't support it.
Echoing Lewis, Daily Caller co-founder Tucker Carlson said during an appearance on Fox News that the full emails suggest Woodward "hyped" the claim that he had been threatened.
Daily Caller co-founder Tucker Carlson
The last time I ever saw a bunch of wingnuts fighting among themselves like this was in the bad old days of the New Left, when SDS split into the PL, RYM2 and Weathermen factions, each one trying to leapfrog over the others in ratcheting up their pidgin revolutionary rhetoric.
conservatives had seized on Woodward's initial story because it "confirmed our suspicion about the Obama Administration's 'Chicago-style' of politics."
Doesn't this kind of talk contradict the story line that the sequester was all President Obama's idea (and therefore his fault)?
You should read the review. It isn't about Kurzweil's work on pattern recognition but instead Kurzweil's claims that pattern recognition is the key to writing a new theory of the mind.
but it's best to respond to that thoughtfully instead of pronouncing the people who know their #### as haters.
edit: the link I posted to "Accelerating Change" has some useful criticisms of that [Kurzweil's and others'] idea, but none of those criticisms involve charges of 'kookery'.
Darwinism is obsoleted by technology the same way that airplanes make gravity obsolete.
Darwin is obsolete
In general this is a silly thing to say.
I am not willing to say he is a kook,...
For one, he really doesn't understand what he is talking about.
From various readings of mine (as I've mentioned before, my primary focus as a history grad student was the New Left), I think it's pretty safe to say that Bernardine was well aware of her effect on males.
Not sure what Frum's talking about here. The sequester is a good thing¹; not clear why he thinks anyone would want to give Obama credit for it.
¹ Well, a good start, I mean.
It's a deliciously amusing thing though, because that 'good thing'?
Well, David, your party is going full tilt (the "Obamaquester") putting it in Obama's hands.... so... will you be needing the WH mailing address to send Obama a thank you card for "his" sequester?
@419: Morty, Darwin is obsolete:
The wiki has a very good summary in its well-written page on Accelerating [Evolutionary] Change.
According to Kurzweil, since the beginning of evolution, more complex life forms have been evolving exponentially faster, with shorter and shorter intervals between the emergence of radically new life forms, such as human beings, who have the capacity to engineer (intentionally to design with efficiency) a new trait which replaces relatively blind evolutionary mechanisms of selection for efficiency. By extension, the rate of technical progress amongst humans has also been exponentially increasing: as we discover more effective ways to do things, we also discover more effective ways to learn, i.e. language, numbers, written language, philosophy, the scientific method, instruments of observation, tallying devices, mechanical calculators, and computers, with each of these major advances in our ability to account for information occurring increasingly close to the last. Already within the past sixty years, life in the industrialized world has changed almost beyond recognition, except in the living memories of those from the first half of the 20th century. This pattern will culminate in unimaginable technological progress in the 21st century, leading to a singularity. Kurzweil elaborates on his views in his books The Age of Spiritual Machines and The Singularity Is Near.
Guys like Kurzweil - and other physicists, philosophers, and computer scientists postulating about biology - generally make fools of themselves because they don't do even cursory background reading in the field, let alone strive to understand its challenges (e.g., parameter estimation).
Well, if you can get an outcome that you like, that infuriates 85% of the population AND blame it on your enemy, why wouldn't you?
Warden spoke during the debate on a bill introduced by Derry Republican Rep. Frank Sapareto that would reduce simple assault from a misdemeanor crime to a violation-level offense in any case of “unprivileged physical contact” that “does not result in harm or injury.” -Ben Leubsdorf, The Concord Monitor
Darwin is obsolete
Post 460 pretty much gave the proper response to this.
Where's Morty get the money to pay for all this? It's not like his consciousness is particularly valuable, I'm assuming, so with any scarcity he's going to have to pay for it. Until we get to a world where replicating everything is cheap and easy, this isn't really a viable goal.
The sequester is a good thing¹; not clear why he thinks anyone would want to give Obama credit for it.
¹ Well, a good start, I mean.
not sure what Frum's talking about here... not clear why he thinks anyone would want to give Obama credit for it.
We know you mean re: Darwinism, JC. It's still an asinine thing to say.
Humans already control natural selection in themselves and various plant and animal cultivars, and have for 10,000 years.
Yet you trash that ratio for others.
Yeah, I can see why.
It's not difficult: Once natural selection ceases to operate--once we control our evolution--Darwin's theory of natural selection, as it applies to us and dictates the course of our evolution, is obsolete.
Since I put DMN on ignore, I don't have to wonder any more. The signal-to-noise ratio in these threads has noticeably improved for me.
I'll take "Useless Pieces of Crap I Worked With in Little Rock" for $100, Alex.
As to being not clear: DMN writes as if he is truly oblivious to the fact that the sequester is [currently] unpopular, playing it as a straight man. Is he oblivious, or is he merely attempting to be dryly humorous?
If we become immortal, that would be a radical, cataclysmic change: we would no longer be driven by the drive for survival. One can moot that this would result in a change in basic character, but I doubt it would be instantaneous, or even fast. And if that is so, wouldn't moralistic terms become anachronistic as well?
Funny that you should speak of this speed of change outstripping Darwin (whatever's happening, it's still organisms responding to environmental pressure, though) as we careen toward a Singularity. It reminds me of the way Dawkins writes of "replicators" and "vehicles" in The Selfish Gene (and how they, too, can be in conflict), and the way natural selection created and engineered "survival machines."
[There is, for example] predictability through inheritance. If a digital intelligence is created directly from a human template (as would be the case in a high-fidelity whole brain emulation), then the digital intelligence might inherit the motivations of the human template.
The agent might retain some of these motivations even if its cognitive capacities are subsequently enhanced to make it superintelligent. This kind of inference requires caution. The agent’s goals and values could easily become corrupted in the uploading process or during its subsequent operation and enhancement, depending on how the procedure is implemented.
This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal.
The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.
Is a billion euros enough to understand the human brain? The Human Brain Project thinks it’s a good start, and evidently the European Commission agrees. On January 28, the Human Brain Project was one of two projects to be awarded a billion in backing from the European Commission’s Future and Emerging Technologies (FET) Initiative.
Henry Markram, the project’s founder and co-director, hopes that over the next decade the project’s consortium of 80+ institutions will use up to an annual $100 million in funding to build a complete digital model of the human brain.
The better we know the brain, the better we can diagnose and treat neurological disease, and maybe—in the greatest feat of natural reverse engineering to date—the better we can build computers and software as flexible, powerful, and efficient as the brain itself. At least, that’s the goal.
Markram says, “It’s an infrastructure to be able to build and simulate the human brain, objectively classify brain diseases, and build radically new computing devices.” See the following HBP video for more:
IBM and DARPA’s SyNAPSE recently completed a 100 trillion synapse simulation based on the connections in a macaque brain. Spaun is a working (albeit very simple) cognitive computer. Meanwhile, Ray Kurzweil’s latest book, How to Create a Mind, outlines his ideas on how to reverse engineer the brain—his new post at Google as Director of Engineering could result in some interesting brain-related projects. And not to be outdone by the EU, the Obama administration is expected to announce the details of a ten year, multi-billion dollar project to map the human brain in the coming weeks.
Governments and academic institutions seem committed to throwing everything but the kitchen sink at the problem of the brain. It's reminiscent of Apollo or the Human Genome Project. And arguably, giant lump-sum public investments (and the inevitable international competition that goes with them) are needed to jumpstart such massive scientific endeavors.
But politics and public funding are fickle beasts. Equally important are the new neural networks—80+ institutional collaborators in the Human Brain Project alone—forming in the global brain. Whether or not we reach the lofty goal of fully modeling the brain in a decade, we’ll certainly have learned and begun to apply much from the process.
Abstract: Exascale high-performance computing systems are projected to become a reality by the end of the decade. Supercomputers of this size are anticipated to have considerable societal impact, by transforming scientific understanding of complex systems including global climate, brain neurophysiology, and fusion energy. Escalating computational performance and interconnection bandwidth significantly beyond today's Petaflop systems will require deployment of hundreds of millions of optical links across all length scales within the system architecture, for interconnection of racks, modules, and individual chips. This talk will describe the device-level research behind IBM CMOS Integrated Silicon Nanophotonic technology, which realizes monolithic integration of deeply-scaled high-speed optical circuits within the front-end of a standard CMOS process. This platform can provide a cost-effective path toward the low-power, massively parallel optical transceivers required for Exascale systems.
Morty--I'm sure I don't know Dawkins as well as you do--is The Selfish Gene the best place to start getting better acquainted?
Also, the sequester can't be unpopular since it hasn't had any effect yet
And if you are interested in evolution and Darwin's theories and views on evolution, Dawkins's The Greatest Show on Earth has been called the best popular presentation, although Jerry Coyne's Why Evolution Is True (he keeps it as simple as his title) is very good and very quick (it's about 130 pages long).
It's called exponential growth for a reason,
I'm skeptical that computing speed's exponential growth can continue indefinitely. As with most examples of exponential growth, there are physical limits -- Planck's constant, atomic scales, the speed of light, etc. -- that will take effect at some point. Quantum computing looks to be only a piece of the answer.
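To put a rough number on one of those limits, here's a quick back-of-envelope sketch in Python; the starting feature size, the halving cadence, and the atomic-scale floor are all my own illustrative assumptions, not hard figures:

```python
import math

# How many more halvings of transistor feature size before we hit
# atomic scale? All three inputs below are rough assumptions.
feature_nm = 22.0        # assume a ~22 nm process as the starting point
halving_years = 2.0      # assume feature size halves every ~2 years
atomic_floor_nm = 0.2    # roughly the diameter of a silicon atom

halvings = math.log2(feature_nm / atomic_floor_nm)
years_left = halvings * halving_years
print(f"~{halvings:.1f} halvings, ~{years_left:.0f} years until atomic scale")
```

Under those assumptions you get only six or seven more halvings, i.e. the straightforward shrink-the-transistor game runs out within a couple of decades, which is exactly why the wilder proposals below come up.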
It may be possible to use a black hole as a data storage and/or computing device, if a practical mechanism for extraction of contained information can be found. Such extraction may in principle be possible (Stephen Hawking's proposed resolution to the black hole information paradox). This would achieve storage density exactly equal to the Bekenstein Bound. Professor Seth Lloyd calculated the computational abilities of an "ultimate laptop" formed by compressing a kilogram of matter into a black hole of radius 1.485 × 10^-27 meters, concluding that it would only last about 10^-19 seconds before evaporating due to Hawking radiation, but that during this brief time it could compute at a rate of about 5 × 10^50 operations per second, ultimately performing about 10^32 operations on 10^16 bits. Lloyd notes that "Interestingly, although this hypothetical computation is performed at ultra-high densities and speeds, the total number of bits available to be processed is not far from the number available to current computers operating in more familiar surroundings."
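For what it's worth, that 5 × 10^50 figure falls straight out of the Margolus-Levitin bound (maximum operations per second of roughly 2E/πħ, with E = mc²). A quick Python check, assuming one kilogram of matter as in Lloyd's thought experiment:

```python
import math

# Margolus-Levitin bound on operations per second for a system of
# total energy E: ops/sec <= 2E / (pi * hbar).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m = 1.0                  # one kilogram of matter, as in Lloyd's "ultimate laptop"

E = m * c**2                        # rest-mass energy, ~9e16 J
ops_per_sec = 2 * E / (math.pi * hbar)
print(f"{ops_per_sec:.2e} operations per second")
```

This lands at about 5.4 × 10^50 ops/sec, matching the quoted figure.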
As with most examples of exponential growth, there are physical limits -- Planck's constant, atomic scales, the speed of light, etc. -- that will take effect at some point. Quantum computing looks to be only a piece of the answer.
Well, first, I think some more operational definitions are in order. Let’s assume “human-level AI” means “capable of non-emotional reasoning and problem solving at the level of the average human in realtime”. Basically, what we currently understand as the functionality of the neocortex (Strong AI). I think the “in realtime” part is important, because if we can only run an AGI at 1/1000th the speed of a human brain (still quite a feat), we’re quite a few years from being able to interact with it.
Quite a few people picked 2030 [as the year human-level Strong AI will be developed], but I didn’t see any real reasoning behind the number other than increasing computing speeds. L Zoel touched on simulation; this seems like a good baseline for a pessimistic projection. Let’s use the Blue Brain project as our benchmark, since it’s the farthest along in true neuronal simulation (with interconnects, not simple point neurons).
Blue Brain can simulate a rat-level neocortical column (~10,000 neurons) in realtime on an IBM Blue Gene/L supercomputer (36 TFLOPS). These are advanced neuronal simulations at the cellular level, including interconnects between neurons. A human neocortical column has ~50,000 neurons (it varies, of course). Assuming the complexity scales with the square of the neuron count (due to interconnects), 25x more computational power is required to simulate one human neocortical column: roughly 1 petaflop, in the range of the fastest supercomputers today.
The human neocortex has between 2 and 5 million neocortical columns. This means that a zettaflop computer (a million times more powerful than today’s fastest supercomputers) would be required to run the Blue Brain simulation, in its current state, at the scale of a human neocortex.
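Spelling out that arithmetic in Python, using the post's own assumed numbers (36 TFLOPS per 10,000-neuron rat column, complexity scaling as the square of neuron count, and the low-end 2 million columns):

```python
# The post's scaling argument, step by step. These are its assumptions,
# not measured figures.
rat_column_flops = 36e12                # Blue Gene/L: 36 TFLOPS per rat column
rat_neurons, human_neurons = 10_000, 50_000

scale = (human_neurons / rat_neurons) ** 2    # interconnect cost assumed to square: 25x
human_column_flops = scale * rat_column_flops # ~0.9 petaflop per human column
cortex_flops = human_column_flops * 2e6       # 2 million columns (low end of 2-5M)
print(f"one column: {human_column_flops:.1e} FLOPS, cortex: {cortex_flops:.1e} FLOPS")
```

The low-end estimate comes out at ~1.8 × 10^21 FLOPS, i.e. about 2 zettaflops, which is where the "zettaflop computer" figure comes from.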
Now, this is incredibly inefficient. We aren’t actually writing intelligence algorithms, just simulating the brain down to the cellular level. A fellow from Sandia Labs predicts that with a zettaflop computer, we could model the entire world’s weather patterns at a resolution of under 100m for 2 weeks. Clearly this is far beyond the scope of what 1 human brain is capable of, yet the hardware required to do both is identical. I think it speaks to the inefficiency of the simulation, and the potential for simplification of an AI model.
But even with this pessimistic outcome of AI, if the colloquial version of Moore’s Law holds, by 2030 we have the processing power to do this. Any other advances in actual AI algorithms (Jeff Hawkins’ Nupic software excels at the pattern recognition many here have mentioned; I think his HTM theory holds much promise) could speed things along. I think 2030-2050 is a sure thing if computers keep pace, and it looks to me like they will.
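As a sanity check on that date, here's a rough Python projection; both inputs (roughly a petaflop of peak supercomputer performance around 2008, and a yearly doubling, the "colloquial" Moore's law the post leans on) are my own illustrative assumptions:

```python
import math

# How long from a petaflop to the zettaflop the simulation needs,
# under an assumed yearly doubling of supercomputer performance?
start_year, start_flops = 2008, 1e15    # assumption: petaflop era begins ~2008
target_flops = 1e21                     # zettaflop target from the scaling argument
doubling_years = 1.0                    # assumed aggressive doubling cadence

doublings = math.log2(target_flops / start_flops)   # a factor of a million: ~20 doublings
year_reached = start_year + doublings * doubling_years
print(f"~{doublings:.0f} doublings -> around {year_reached:.0f}")
```

A factor of a million is about 20 doublings, so yearly doubling lands just before 2030; a slower 18-24 month cadence pushes it toward 2040-2050, which brackets the post's 2030-2050 range.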
Shrinking MOSFETs down to 16nm by 2016, 3D chip stacking, optical chip interconnects, self-assembling CNTFETs, graphene clock multipliers: these are all things being experimented with and tested now that don’t require any wildcard technologies (like quantum computing, single-photon transistors, molecular computing, etc).
2030-2050 has my vote… [for the development of Strong AI at the level of the human brain].
Biology, biochemistry & genetics, on the other hand, are just beginning their exponential explosion (e.g., the cost of DNA sequencing is currently falling at a supra-Moore's-law rate).
Btw, I've assumed the device McCoy is waving is a portable MRI (among other things), with real-time, high definition reading and diagnosis, natch. That means he's packing zettaflop power at least into a salt shaker.
I assume you could regrow organs quickly in TOS
Though in 2013 what we have is presumably generic skin or more probably a skin-like substance, and not one that tailors itself to mimic your own skin's DNA the way it surely would in 2450ish.
-MRI and PET and other non-invasive imaging, though the machines are much larger.
I assume you could regrow organs quickly in TOS (massive repairs were done from time to time), though I don't remember specific examples,
In Star Trek IV they were on present day earth and McCoy gave someone a pill that regrew their kidney(?).
In November 2011, a group of MIT researchers created the first computer chip that mimics the analog, ion-based communication in a synapse between two neurons using 400 transistors and standard CMOS manufacturing techniques.
In June 2012, Spintronic Researchers at Purdue presented a paper on design for a neuromorphic chip using lateral spin valves and memristors. They argue that the architecture they have designed works in a similar way to neurons and can therefore be used to test various ways of reproducing the brain’s processing ability. In addition, they are significantly more energy efficient than conventional chips.
Within the brain, neurons all have electrical voices, each singing out in harmony with millions of others, a complex choir of information processing from which emerges the crown jewel of the human being – the conscious mind.
As described in The Blue Brain Way: Creating ‘singing’ neurons, it’s necessary to first create the voices of neurons – i.e. create electrical models that are representative of the full diversity of electrical behaviors exhibited by neurons. These are the e-types. Secondly, it is necessary to transplant these voices into the correct ‘voice boxes’ or neuron morphologies to create me-types (morpho-electrical types).
We have to do that if we're going to aim at comprehensive theories of Being, but it also means we'll occasionally look like fools. The alternative is to theorize only narrowly, from subjects we've mastered.
Also, the sequester can't be unpopular since it hasn't had any effect yet; what's unpopular is the scare tactics being announced by Obama and reported uncritically by the liberal media, in which cutting an infinitesimal portion of the federal budget will somehow cut every popular program but nothing unpopular.
Why are we limited to only theorizing? We have at our disposal the scientific method, right? If Kurzweil wants to be taken seriously, he can construct a model and experiment to validate his theories. Then his critics can review and duplicate, if possible, his findings. Short of that, it's all just mental masturbation.
In what way does thinking involve processing a stimulus and categorizing it? When I am thinking about London while in Miami I am not recognizing any presented stimulus as London—since I am not perceiving London with my senses. There is no perceptual recognition going on at all in thinking about an absent object. So pattern recognition cannot be the essential nature of thought. This point seems totally obvious and quite devastating, yet Kurzweil has nothing to say about it, not even acknowledging the problem.