Sunday, February 2, 2020

TIMELINE MYTHS

The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.

One popular myth is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933, less than 24 hours before Szilard’s invention of the nuclear chain reaction, that nuclear energy was “moonshine.” And Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.

There have been a number of surveys asking AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more.

There’s also a related myth that people who worry about AI think it’s only a few years away. In fact, most people on record worrying about superhuman AI guess it’s still at least decades away. But they argue that as long as we’re not 100% sure that it won’t happen this century, it’s smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.



Narrow Artificial Intelligence 



Narrow AI is all around us and is easily the most successful realization of artificial intelligence to date. With its focus on performing specific tasks, Narrow AI has experienced numerous breakthroughs in the last decade that have had "significant societal benefits and have contributed to the economic vitality of the nation," according to "Preparing for the Future of Artificial Intelligence," a 2016 report released by the Obama Administration.

A few examples of Narrow AI include:

Google search

Image recognition software

Siri, Alexa and other personal assistants

Self-driving cars

IBM's Watson

HOW DOES ARTIFICIAL INTELLIGENCE WORK?



Can machines think? — Alan Turing, 1950

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?"

Turing's 1950 paper "Computing Machinery and Intelligence" and the Turing Test it proposed established the fundamental goal and vision of artificial intelligence.


At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no single definition of the field is universally accepted.

The major limitation in defining AI as simply "building machines that are intelligent" is that it doesn't actually explain what artificial intelligence is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is "the study of agents that receive percepts from the environment and perform actions." (Russell and Norvig viii)

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI:

Thinking humanly

Thinking rationally

Acting humanly

Acting rationally

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting "all the skills needed for the Turing Test also allow an agent to act rationally." (Russell and Norvig 4)
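To make the agent abstraction concrete, below is a minimal sketch in Python of a simple reflex agent in a two-square vacuum world, the toy environment Russell and Norvig use early in the book. The percept format and condition-action rules here are illustrative assumptions, not code from the textbook.

# A simple reflex agent: maps each percept directly to an action
# via condition-action rules (illustrative vacuum-world example).

def simple_reflex_agent(percept):
    location, status = percept          # e.g., ("A", "dirty")
    if status == "dirty":
        return "suck"                   # clean the current square
    return "right" if location == "A" else "left"   # otherwise move on

# The agent loop: repeatedly receive a percept, perform an action.
for percept in [("A", "dirty"), ("A", "clean"), ("B", "dirty")]:
    print(percept, "->", simple_reflex_agent(percept))

Even this trivial agent fits the Russell and Norvig definition: it receives percepts from its environment and performs actions, though a fully rational agent would also weigh which action leads to the best expected outcome.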

Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as "algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together."

While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with machine learning and other subsets of artificial intelligence.

While addressing a crowd at the Japan AI Experience in 2017, DataRobot CEO Jeremy Achin began his speech by offering the following definition:

"AI is a computer system able to perform tasks that ordinarily require human intelligence... Many of these artificial intelligence systems are powered by machine learning, some of them are powered by deep learning and some of them are powered by very boring things like rules."


HISTORY OF ARTIFICIAL INTELLIGENCE


Intelligent robots and artificial beings first appeared in the ancient Greek myths. Aristotle's development of the syllogism and its use of deductive reasoning was a key moment in mankind's quest to understand its own intelligence. While the roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.

1943

Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity." The paper proposes the first mathematical model for building a neural network.

1949

In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they're used. Hebbian learning continues to be an important model in AI.

1950

Alan Turing publishes "Computing Machinery and Intelligence," proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.

Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.

Claude Shannon publishes the paper "Programming a Computer for Playing Chess."

Isaac Asimov publishes the "Three Laws of Robotics." 

1952

Arthur Samuel develops a self-learning program to play checkers.

1954

The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.

1956

The phrase artificial intelligence is coined at the "Dartmouth Summer Research Project on Artificial Intelligence." Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today.

Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.

1958

John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense." The paper proposes the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.

1959

Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.

Herbert Gelernter develops the Geometry Theorem Prover program.

Arthur Samuel coins the term machine learning while at IBM.

John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.

1963

John McCarthy starts the AI Lab at Stanford.

1966

The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.

1969

The first successful expert systems, DENDRAL, a program for identifying the structure of organic chemical compounds, and MYCIN, designed to diagnose blood infections, are developed at Stanford.

1972

The logic programming language PROLOG is created.

1973

The "Lighthill Report," detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.

1974-1980

Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," artificial intelligence funding dries up and research stalls. This period is known as the "First AI Winter."

1980

Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first "AI Winter."

1982

Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.

1983

In response to Japan's FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA funded research in advanced computing and artificial intelligence.

1985

Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.

1987-1993

As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the "Second AI Winter." During this period, expert systems prove too expensive to maintain and update, eventually falling out of favor.

Japan terminates the FGCS project in 1992, citing failure in meeting the ambitious goals outlined a decade earlier.

DARPA ends the Strategic Computing Initiative in 1993 after spending nearly $1 billion and falling far short of expectations.

1991

U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.

1997

IBM's Deep Blue beats world chess champion Garry Kasparov.

2005

STANLEY, a self-driving car, wins the DARPA Grand Challenge.

The U.S. military begins investing in autonomous robots like Boston Dynamic's "Big Dog" and iRobot's "PackBot."

2008

Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.

2011

IBM's Watson trounces the competition on Jeopardy!

2012

Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network 10 million YouTube videos as a training set, using deep learning algorithms. The neural network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.

2014

Google's self-driving car becomes the first to pass a state driving test.

2016

Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.



Machine Learning & Deep Learning



Much of Narrow AI is powered by breakthroughs in machine learning and deep learning. Understanding the difference between artificial intelligence, machine learning and deep learning can be confusing. Venture capitalist Frank Chen provides a good overview of how to distinguish between them, noting: 

"Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques."

Simply put, machine learning feeds a computer data and uses statistical techniques to help it "learn" how to get progressively better at a task, without having been specifically programmed for that task, eliminating the need for millions of lines of written code. Machine learning consists of both supervised learning (using labeled data sets) and unsupervised learning (using unlabeled data sets).
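As a concrete illustration, here is a minimal sketch using the scikit-learn library (an assumed dependency; the built-in iris dataset and the model choices are purely illustrative). The supervised classifier learns from labeled examples, while the unsupervised clusterer groups the same data without seeing any labels.

# Supervised learning: learn from labeled examples (scikit-learn sketch).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)                    # features and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)            # no task-specific rules coded
model.fit(X_train, y_train)                          # "learns" from the examples
print("test accuracy:", model.score(X_test, y_test))

# Unsupervised learning: find structure with no labels at all.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print("first ten cluster assignments:", clusters[:10])

Note that nothing in the code describes what an iris species looks like; the statistical fitting step extracts that from the data, which is exactly the contrast with hand-written rules described above.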

Deep learning is a type of machine learning that runs inputs through a biologically-inspired neural network architecture. The neural networks contain a number of hidden layers through which the data is processed, allowing the machine to go "deep" in its learning, making connections and weighting input for the best results.
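To show what those hidden layers look like in practice, here is a minimal sketch in Python using PyTorch (an assumed dependency; the layer sizes, random inputs and single training step are purely illustrative, not a real training recipe).

# A small "deep" network: inputs flow through several hidden layers,
# each re-weighting and transforming the data before the output.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),    # hidden layer 1
    nn.Linear(16, 16), nn.ReLU(),   # hidden layer 2
    nn.Linear(16, 3),               # output layer (say, 3 classes)
)

x = torch.randn(8, 4)                        # batch of 8 examples, 4 features
labels = torch.randint(0, 3, (8,))           # random illustrative labels
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One training step: forward pass, compute loss, backpropagate, update weights.
loss = loss_fn(model(x), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("loss after one step:", loss.item())

The backpropagation step is where the "weighting input for the best results" happens: every weight in every layer is nudged in the direction that reduces the loss.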

Artificial General Intelligence



The creation of a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for AGI has been fraught with difficulty.

The search for a "universal algorithm for learning and acting in any environment" (Russell and Norvig 27) isn't new, but time hasn't eased the difficulty of essentially creating a machine with a full set of cognitive abilities.

AGI has long been the muse of dystopian science fiction, in which super-intelligent robots overrun humanity, but experts agree it's not something we need to worry about anytime soon.

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.


Sunday, January 19, 2020

INTERNET




The Internet consists of tens of thousands of computer networks linked together to form one huge global network, allowing any computer on one network to communicate with any computer on another. The two main services used on the Internet are email and the World Wide Web. Email allows text messages to be sent, along with attached computer files such as images and video clips, to other computers on the Internet. The Web consists of millions of pages made up of digital files that are stored on computers across the world and can be viewed using a Web browser. The Web also provides interactive access to various services, for example banking and shopping.
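As a minimal illustration of what a Web browser does behind the scenes, the following Python sketch requests a page's digital file from a server over HTTP using only the standard library (the URL is an illustrative placeholder, and the sketch assumes network access).

# Fetch a Web page the way a browser does: send an HTTP request,
# receive the page's HTML file in response.
from urllib.request import urlopen

with urlopen("https://example.com") as response:
    html = response.read().decode("utf-8")   # the page as stored on the server

print(html[:200])   # print the start of the HTML; a browser would render it

A real browser then parses this HTML and renders it as text, images and links; email works similarly, with messages handed between servers using a different protocol (SMTP).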

INKJET PRINTER



Inkjet printers expel ink droplets from hundreds of tiny jets, or nozzles, onto a medium such as paper to print an image. Each droplet corresponds to a single pixel (picture element). Black-and-white printers use only black ink, while colour printers overprint combinations of the printing colours (cyan, magenta, yellow and black) to create a full colour range. The printhead containing the nozzles moves sideways across the paper, creating a line of pixels, before the paper moves slightly forward so the next line can be printed. Two basic methods are used to eject ink: thermal, in which ink is heated to form an expanding bubble that expels a droplet from the nozzle, and piezoelectric, in which an electric current expands a crystal, causing it to push out the ink droplet. Some inkjet printers can print digital photographs directly from a memory card.
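As an illustration of how four inks can reproduce a full colour range, here is a minimal Python sketch of the textbook RGB-to-CMYK conversion (a simplified formula; real printer drivers use far more sophisticated colour profiles, so treat this as a sketch only).

# Convert an RGB colour (values 0.0-1.0) to the four printing inks.
def rgb_to_cmyk(r, g, b):
    k = 1 - max(r, g, b)              # black ink handles the darkest component
    if k == 1:                        # pure black: no colour ink needed
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)         # cyan absorbs red light
    m = (1 - g - k) / (1 - k)         # magenta absorbs green light
    y = (1 - b - k) / (1 - k)         # yellow absorbs blue light
    return c, m, y, k

print(rgb_to_cmyk(1.0, 0.5, 0.0))     # orange -> (0.0, 0.5, 1.0, 0.0)

Each returned value can be read as the fraction of that ink the printer would lay down for the pixel.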

Friday, January 17, 2020

TABLET COMPUTER


By the early 1990s, electronic circuitry had been miniaturized to such an extent that it was possible to make small handheld computing devices.

The first of these was the Personal Digital Assistant (PDA), which offered features including an address book, calendar and notepad. In recent years, PDAs have been overtaken by smartphones with Internet and email access. A related product is the e-book reader, which stores books in digital form and uses "electronic paper" to mimic the appearance of ink on real paper. An e-book reader no bigger than a thin paperback can store several thousand digital books in its memory. The most recent handheld computing device is the tablet computer. This looks like a thin, flat display, but it is actually a complete computer.

Tablet computers are typically controlled by a touch-sensitive screen and have a wireless link to other computers and the Internet. They run software applications, or apps, downloaded from the Internet. The most popular tablet computer currently is the Apple iPad. It has a multi-touch interface that enables its screen to detect the movement of fingertips.

As well as selecting options and apps by touching the screen, users can enlarge or shrink images by moving their fingertips apart or together on the screen.

PERSONAL COMPUTER

Personal computers (PCs) fall into two main types: IBM-compatible PCs, known simply as PCs, and Apple Macintosh PCs, known as "Macs". They differ in the way files and programs, and the user's access to them, are organized, and programs must be tailored for each type. However, in most other respects PCs and Macs have much in common. Both contain microchips, or integrated circuits, that store and process data. The "brain" of any PC is a chip known as the central processing unit (CPU), which performs mathematical operations in order to run program instructions and to receive, store, and output data.

The most powerful personal computer CPUs today can perform more than a billion calculations a second. Data can be input via CDs, USB memory sticks and other storage media. Highly portable laptop and notebook PCs are also in widespread use. Most PCs are able to communicate with many other devices, including digital cameras and smartphones.