April 30, 2015

Iceland Creates DNA Map of the "Entire Nation"

Scientists in Iceland have produced an unprecedented snapshot of a nation's genetic makeup, discovering a host of previously unknown gene mutations that may play roles in ailments as diverse as Alzheimer's disease, heart disease and gallstones.

April 29, 2015

China Plans to Build Solar Space Station

A random space station illustration

China is planning to build a huge solar power station 36,000 kilometers above the ground in an attempt to tackle its growing energy crisis, the greenhouse effect and smog.

The power station would be a super spacecraft on a geosynchronous orbit equipped with huge solar panels. The electricity generated would be converted to microwaves or lasers and transmitted to a collector on Earth.

According to Wang Xiji, an academician of the Chinese Academy of Sciences and an International Academy of Astronautics member, the space power station would be really huge, with the total area of the solar panels reaching 5 to 6 sq km. That would be equivalent to 12 of Beijing's Tian'anmen Square, the largest public square in the world, or nearly two New York Central Parks.
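As a rough sanity check of that comparison, here is a quick back-of-the-envelope calculation. The areas used for the two landmarks (roughly 0.44 sq km for Tian'anmen Square and 3.41 sq km for Central Park) are not stated in the source and are assumed here:

```python
# Sanity check of the area comparison above, using approximate figures
# (assumed, not from the article): Tian'anmen Square ~0.44 sq km,
# New York's Central Park ~3.41 sq km.
panel_area = 5.5          # midpoint of the quoted 5 to 6 sq km
tiananmen = 0.44
central_park = 3.41

print(round(panel_area / tiananmen, 1))     # on the order of 12 Tian'anmen Squares
print(round(panel_area / central_park, 1))  # nearly two Central Parks
```

The quoted "12 squares / nearly two Central Parks" figures check out against these approximate areas.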

The electricity generated from ground-based solar plants fluctuates with night and day and the weather, but a space generator can collect energy 99 per cent of the time. Also, space-based solar panels can generate ten times as much electricity per unit area as ground-based panels, says Duan Baoyan, a member of the Chinese Academy of Engineering.

It's also of great strategic significance: whoever gets hold of the technology first could dominate the future energy market. China plans to build this space station by 2020.

Lamborghini Veneno - Most Expensive Collector's Item

An open racing prototype with an extreme design and breathtaking performance, the Lamborghini Veneno is a limited-production supercar based on the Lamborghini Aventador. Built to celebrate the company's 50th anniversary, the Veneno will, according to Lamborghini, be limited to just 9 cars ever made. One of these, dubbed Car Zero, is to be placed in Lamborghini's museum as a display vehicle.

April 28, 2015

Bloodborne Game Review

If you are a fan of action, role-playing and horror games, then this game is absolutely meant for you. Bloodborne, developed by From Software, is unconventional in every way.
Warning: Bloodborne is quite difficult. It is very playable, but it does not dumb down the action. You will die a lot; however, that is one of the best parts of the game. Read on to find out more...

April 27, 2015

Aston Martin DB11 Concept Car

What is known more firmly is the engineering basis for the new car. It will use an all-new platform, upgrading Aston Martin's VH aluminium architecture to something called VH500, pepped up with the latest modules of everything: updated suspension and transmission, all-new electrical systems such as sat-nav and, we hear, a smattering of lightweight composites.

A3W Motiv Concept MotorBike

This is another good concept I have seen around: French designer Julien Rondino's three-wheeled motorcycle, the A3W Motiv. It is a trike, i.e. it has three wheels that are part of the basic design rather than a modification.

We are waiting for more details on the bike, but for now what we know is that it has been designed around KTM's 999cc LC8 V-twin. The chassis appears to be a mix of cast aluminium and steel tube sections, and the bike is packed with interesting bits: hub-centre steering, adjustable ergonomics and Buell-style perimeter brakes.

Here is the link to an album of the A3W Motiv's pics.

April 26, 2015

Acura Comes Up with Really Cool NSX Info...



At the Society of Automotive Engineers 2015 World Congress, Acura spilled some new details about the upcoming next-generation NSX. Befitting the audience, it was all very techno-nerdy.

April 24, 2015

The Suzuki Crosscage Concept MotorBike


Suzuki Crosscage Concept
The Suzuki Crosscage is a hydrogen-powered fuel-cell motorcycle prototype that already exists. Suzuki is putting the Crosscage through road testing with the goal of commercialization in less than 5 years. This joint venture between Suzuki and Intelligent Energy is really going to define style with clean fuel.

April 23, 2015

Chevy's Robo-Car Concept Is a Switchblade on Wheels

The Chevrolet-FNR is unlike any self-driven car or concept we've ever seen.

General Motors has a vision of motoring in 2030, when autonomous cars free us from the tyranny of the commute, and it’s … interesting.

August 18, 2013

Deep Learning


When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.

Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.

With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.

Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”

Building a Brain


There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
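That single-neuron response can be sketched in a few lines of Python. This is a minimal illustration of the description above, not any particular production system; the random weight range and the sigmoid squashing function are common conventions, assumed here for concreteness:

```python
import math
import random

def sigmoid(x):
    # Squash any weighted sum into the (0, 1) range,
    # giving the "mathematical output between 0 and 1" described above
    return 1.0 / (1.0 + math.exp(-x))

class Neuron:
    """A single simulated neuron with randomly initialized connection weights."""
    def __init__(self, n_inputs, seed=0):
        rng = random.Random(seed)
        # Random numerical values ("weights") assigned to each incoming connection
        self.weights = [rng.uniform(-1.0, 1.0) for _ in range(n_inputs)]
        self.bias = rng.uniform(-1.0, 1.0)

    def respond(self, inputs):
        # Weighted sum of the digitized input features, squashed to (0, 1)
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return sigmoid(total)

neuron = Neuron(n_inputs=3)
# The inputs might be, say, pixel intensities along an edge in an image
output = neuron.respond([0.2, 0.9, 0.4])
print(round(output, 3))
```

Whatever the inputs, the response always lands strictly between 0 and 1, which is what lets many such neurons be wired together into a network.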


Some of today’s artificial neural networks can train themselves to recognize complex patterns.


Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
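The training procedure described above — show the network examples, compare its output to the desired answer, and adjust the weights — can be sketched for a single neuron. The toy data, learning rate, and gradient-style update rule here are illustrative assumptions (one common way such adjustments are made), not the method of any specific system mentioned in the article:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "training set": each pattern is a tiny digitized feature vector,
# labeled 1 (the pattern we want recognized) or 0 (anything else).
data = [
    ([1.0, 0.0, 1.0], 1),
    ([1.0, 1.0, 1.0], 1),
    ([0.0, 1.0, 0.0], 0),
    ([0.0, 0.0, 1.0], 0),
]

rng = random.Random(0)
weights = [rng.uniform(-0.5, 0.5) for _ in range(3)]
bias = 0.0
rate = 0.5  # how strongly each mistake adjusts the weights

for _ in range(2000):  # "blitz" the neuron with the examples repeatedly
    for inputs, target in data:
        out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        err = target - out  # positive if the output was too low, negative if too high
        # Nudge each weight in the direction that shrinks the error
        weights = [w + rate * err * x for w, x in zip(weights, inputs)]
        bias += rate * err

for inputs, target in data:
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, target, round(out, 2))
```

After enough passes the outputs settle near 1 for the labeled patterns and near 0 for the rest — the same adjust-until-consistent loop, scaled up enormously, is what trains the deep networks discussed in this article.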

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.


WHY IT MATTERS

Computers would assist humans far more effectively if they could reliably recognize patterns and make inferences about the world.

Breakthrough
A method of artificial intelligence that could be generalizable to many kinds of applications.

Key Players
• Google
• Microsoft
• IBM
• Geoffrey Hinton, University of Toronto