
5 pieces of AI advice you need to implement, starting TODAY: DeepMind Co-Founder

Data science and artificial intelligence fans, this might be a good day for you. A Google DeepMind co-founder gave a teenage AI fan five pieces of advice, and I think you should know them too!

Some artificial intelligence specialists at organizations like Google and Facebook are currently earning more money than investment bankers at Goldman Sachs and J.P. Morgan.

These specialists also have the benefit of working in a field of technology that is poised to have a major impact on the world we live in.

However, for many people, it's not clear how to go about landing a job in AI.

This week, 17-year-old Londoner Aron Chase asked Shane Legg — the chief scientist and co-founder of DeepMind, an AI lab acquired by Google for a reported £400 million — for five pieces of advice for an AI enthusiast like himself.
"Hey Shane I’m currently 17 from London England and am very passionate about AI, also learning about in-depth human needs. What would be the 5 pieces of advice and tips you would give to a young person like me?"

To Chase's surprise, Legg replied, telling him to learn linear algebra well, calculus to an OK level, probability theory and statistics to a good level, the basics of theoretical computer science, and how to code well in Python and C++. He also encouraged him to read and implement machine learning papers, and to "play with stuff!".
Now, that's great advice for artificial intelligence enthusiasts who want to begin their journey in this field.
Personally, I find coding in C++ really essential because most of the frameworks are built on this fast language. Be it TensorFlow, PyTorch, mlpack, or anything else, they all take advantage of the speed C++ offers. Moreover, the need for math is pretty obvious. (This is something you will get the hang of the moment you start studying K-Nearest Neighbors in machine learning or forward propagation in deep learning.)
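To make that math point a bit more concrete, here is a tiny NumPy sketch of my own (an illustration I'm adding, not something from Legg's reply). It shows where the linear algebra lives in the two examples above: the distance computation at the heart of K-Nearest Neighbors, and the matrix-vector product of a single forward-propagation step. The data and layer sizes are made up just so the code runs.

# A minimal sketch (my own illustration, not from Legg's advice) of where
# linear algebra shows up in K-Nearest Neighbors and forward propagation.
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify x_query by majority vote among its k nearest training points.
    The core of it is linear algebra: vector differences and norms."""
    distances = np.linalg.norm(X_train - x_query, axis=1)  # Euclidean distances
    nearest = np.argsort(distances)[:k]                    # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                       # majority label

def dense_forward(x, W, b):
    """One forward-propagation step of a dense layer: a matrix-vector
    product plus a bias, followed by a ReLU non-linearity."""
    return np.maximum(0, W @ x + b)

# Made-up toy data, just to show the calls run.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([1.0, 0.9])))   # -> 1

W = np.array([[0.5, -0.2], [0.3, 0.8]])
b = np.array([0.1, -0.1])
print(dense_forward(np.array([1.0, 2.0]), W, b))              # -> [0.2 1.8]

Nothing fancy here, but once you see that both problems reduce to vectors and matrices, the "learn linear algebra well" part of the advice makes a lot of sense.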
As for getting the hang of implementing machine learning papers, I still have to do that myself, so I can't say much about it :P

Anyway, that's all for this post. Do pay heed to this advice!
Uddeshya Singh
