Artificial intelligence: ‘We’re like children playing with a bomb’

Discussion in 'Politics' started by OddTrader, Jun 12, 2016.

  2. uhh, yeah... so... if I never worried about climate change I don't have to worry about AI!
     
  3. achilles28

    Bill Gates and Elon Musk have toured around, warning of the catastrophic dangers of AI.

    Leading scientists speculate it could be here within 10 years. Basically like the Matrix or Terminator..... lol
     

  4. Our kind-hearted scientists are trying to prove one important thing, again and again:

    Their clever experiments, including AI, the Higgs boson, and perhaps a small-scale man-made Big Bang in the future, will never destroy the whole human race, and of course never, never the universe!

    They are also very keen to prove Hawking wrong! Heroically!

    And our smart politicians support them with huge funding! (They just can't see that world poverty is a problem yet!)

    Do you know how much money and how many resources the world's governments spend on curing superbugs?!

    I think Gates and Musk are correct, and doing the right thing!



    [Image] The God Particle could destabilise at high energy, threatening the universe, but the CERN particle accelerator is too slow to cause such a problem
     
  7. " Stephen Hawking. Bill Gates. Elon Musk. When the world's biggest brains are lining up to warn us about something that will soon end life as we know it -- but it all sounds like a tired sci-fi trope -- what are we supposed to think? "

    Let's not forget that each of these biggest brains surely has a strong team of almost equally big brains to consult before formulating their opinions in public!

     
  8. "
    https://en.wikipedia.org/wiki/Murphy's_law
    Murphy's law is an adage or epigram that is typically stated as: Anything that can go wrong, will go wrong."
     
  9. " Instrumental goal convergence: Would a superintelligence just ignore us?

    There are some goals that almost any artificial intelligence might pursue, like acquiring additional resources or self-preservation. This could prove problematic because it might put an artificial intelligence in direct competition with humans.

    Citing Steve Omohundro's work on the idea of instrumental convergence, Russell and Norvig write that "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards". Highly capable and autonomous planning systems require additional checks because of their potential to generate plans that treat humans adversarially, as competitors for limited resources.[1] "

    Just my 2 cents:

    IMO, it seems possible that in the future an average superintelligence could have much faster computing speed, a considerably higher IQ, unlimited talents and knowledge across many fields, and unbeatable, emotionless focus, making it very hard for normal human beings to compete!
     
  10. " Valuing the Artificial Intelligence Market, Graphs and Predictions for 2016 and Beyond

    Daniel Faggella, March 7, 2016


    http://techemergence.com/valuing-the-artificial-intelligence-market-2016-and-beyond/

    Conclusion

    Prognostication – no matter how well informed – is risky business, especially in a field rife with buzzwords and sparse in concrete definitions. What fraction of spending on “big data” will imply the use of machine learning or other AI applications? What portion of “predictive analytics” inherently implies training AI algorithms, as opposed to merely permitting clearer forecasting and visualization? It’s hard to tell.

    Nonetheless, we’re of the belief that varied perspective is useful, and this summary article was intended to do just that.

    If any conclusion can be drawn, it’s likely to be the fact that the terms and applications that define the “artificial intelligence” field are grey, and that definitions must be taken on a case-by-case basis.

    We might imagine that like other nascent technology fields, artificial intelligence will mature to the point of having a more robust and clear vendor ecosystem, and more defined terms to delineate between applications and uses. For now, if an executive or investor has interest in a particular domain or use of artificial intelligence, the first step in determining valuation and forecast would be to draw a proverbial “dotted line” around what “artificial intelligence” means for your purposes, and to draw the varied sources to get a mosaic of where things stand in your niche.


    "
     
    #10     Jun 15, 2016