Forget about any relation between the AI in this book and anything we've actually been able to do in AI research today. You won't find a discussion of a single algorithm, or even an exploration of the higher-level mathematical properties of existing algorithms, in this book.

As a result, this book could have been written 30 years ago, and its arguments wouldn't be any different. It gets particularly boring when the author actually does spend page after page introducing a framework for how an AI could improve (through speed improvement, quality improvement, and so on). If you want to take the abstraction high road, just dispense with super-generalized frameworks like this altogether and get to the point.

Similarly, the discussion of where the recalcitrance of a future AI will come from, whether from software, hardware, or content, is purely abstract and speculative, even though there are real-world examples of hardware evolution speed outpacing software design speed, and the other way around. Second, even if you operate fully in the realm of speculation, at least make that speculation tangible and interesting.

The book struck me many times as the kind of ideas you'd come up with if you thought through a particular scenario for a few minutes over a beer with friends.

There are very few counterintuitive ideas in there. One chapter grandly announces the presentation of an elaborate "takeover scenario", and again we have the "friends over a beer" problem. At times, the philosophizing in some chapters reads like a mildly interesting Star Trek episode (such as the one about how best to set goals for an AI so that it acts morally and doesn't kill us).

In the best and worst ways. But every now and then, there's a clever historical analogy or an interesting idea. Ronald Reagan wasn't willing to share the technology for how to efficiently milk cows, but he offered to share SDI with the USSR; how would AI be shared?

Or the insight that the difference between the dumbest and smartest human alive is tiny on a total intelligence scale (from IQ 75 to IQ 180), and that this means an AI would likely look to humans as if it very suddenly leapt from being really dumb to unbelievably smart, bridging this entire human intelligence gap extremely quickly.

But what struck me about the best ideas in the book is that it almost always quotes just one guy, Eliezer Yudkowsky. All in all, though, the topic itself is so interesting that it's worth giving the book a try. Maybe not the following chapters, though.

I have to say that, if anything, Bostrom's writing reminds me of theology. It's not lacking in rigor or references. Bostrom seems highly intelligent and well-read.

The problem (for me) is rather that the main premise he starts with is one I find less than credible. Most of the book boils down to "Let's assume that there exists a superintelligence that can basically do whatever it wants, within the limits of the laws of physics. With this assumption in place, let's then explore what consequences this could have in areas X, Y, and Z."

These summaries don't yield any specific answer as to when human-level AI will be attained, and Bostrom is evasive as to what his own view is. However, Bostrom seems to think that even if you don't commit to any particular timeline on this question, you can assume that at some point human-level AI will be attained.

Now, once human-level AI is achieved, it'll be but a short step to superintelligence, says Bostrom. His argument as to why this transition period should be short is not too convincing. We are basically told that the newly developed human-level AI will soon engineer itself (don't ask exactly how) to be so smart that it can do stuff we can't even begin to comprehend (don't ask how we can know this), so there's really no point in trying to think about it in much detail.

The AI Lord works in mysterious ways. I found the chapters on risks and AI societies to be pure sci-fi, with even less realism than "assume spherical cows". The chapters on ethics and value acquisition did, however, contain some interesting discussion.

All in all, throughout the book I had an uneasy feeling that the author was trying to trick me with a philosophical sleight of hand. I don't doubt Bostrom's skills with probability calculations or formalizations, but the principle of "garbage in, garbage out" applies to such tools as well. If one starts with implausible premises and assumptions, one will likely end up with implausible conclusions, no matter how rigorously the math is applied. Bostrom himself is very aware that his work isn't taken seriously in many quarters, and at the end of the book he spends some time trying to justify it.

He makes some self-congratulatory remarks to assure sympathetic readers that they are really smart, smarter than their critics. Whereas most people would probably think that concern for the competence of our successors should push us toward making sure that the education we provide is both high-quality and widely available, and that our currently existing and future children are well fed and taken care of, or that concern for existential risk should push us to fund action against poverty, disease, and environmental degradation, Bostrom and his buddies at their "extreme end of the intelligence distribution" think this money would be better spent funding fellowships for philosophers and AI researchers working on the "control problem".

That the very idea of these emulations exists only in Bostrom's publications is no reason to ignore the enormous moral weight they should have in our moral deliberations. Despite the criticism I've given above, the book isn't necessarily an uninteresting read.

As a work of speculative futurology (is there any other kind?), it has its merits. But if you're looking for an evaluation of the possibilities and risks of AI that starts from our current state of knowledge, with no magic allowed, this isn't that book. This should be a warning that it is not solely, or even primarily, aimed at soft-science readers. Interestingly, a working knowledge of philosophy is more valuable for unpacking the most utility from this book than is knowledge of computer programming or science.

But then you are not going to get a book on the existential threat of Thomas the Tank Engine from a professor in the Faculty of Philosophy at Oxford University. A good understanding of economic theory would also help any reader. Bostrom lays out in detail the two main paths to machine superintelligence, whole brain emulation and seed AI, and then looks at the takeoff that would take place from smart narrow computing to super-computing and high machine intelligence.

At times the book is repetitive and keeps making the same point in slightly different scenarios. It was almost as if he were just cutting and pasting set phrases and terminology into slightly different ideas. Overall, it is an interesting and thought-provoking book at whatever level the reader interacts with it, though the text would have been improved by more concrete examples so the reader can better flesh out the theories. Challenging but readable, and the urgency is real.

Verified Purchase A clear, compelling review of the state of the art, the potential pitfalls, and ways of approaching the immensely difficult task of maximising the chance that we'll all benefit from the arrival of a superintelligence. An important book showcasing the work we collectively need to do BEFORE the fact.

Given the enormity of what will likely be a one-time event, this is the position against which anyone involved in the development of AI must justify their approach, whether or not they are bound by the Official Secrets Act.

The one area in which I feel Nick Bostrom's sense of balance wavers is in extrapolating humanity's galactic endowment into an unlimited and eternal capture of the universe's bounty. Once you start, it pulls you in and down, as characters develop and certainties melt: when the end comes, the end has already happened.

Verified Purchase A difficult read by an excellent analyst on the very real existential threat posed by AI.

Bostrom could have opened with chapter 10 of the book by introducing the various castes of AI and the potential threats they pose, and then gone on to examine the challenges of controlling these threats (chapter 9). He could have then asked the pivotal mid-second-act question, 'Is the default outcome doom?'


