Introduction

This may be a nitpick. Regarding the drain repair contractor example, when it says that some of the variance owes to a few factors besides your two features: here we have only one feature, the hours worked by the contractor, and two parameters, the hourly rate and the bias, i.e. y = ax + b. Since the hourly rate is $100 and the contractor charges $50 to show up, a = 100 and b = 50.
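
Just to make the arithmetic explicit, here is a tiny sketch of that reading of the model (the 2-hour job is my own made-up input, not from the book):

```python
# One feature (hours worked), two parameters (hourly rate a, show-up fee b): y = a*x + b
a, b = 100, 50

def charge(hours):
    return a * hours + b

print(charge(2))  # 100*2 + 50 = 250 dollars for a hypothetical 2-hour job
```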


Great catch @roncato! Would you like to be a contributor for us?! Many thanks!

  • A first-principles model specifies an entire solution irrespective of the data; examples are physical laws.

  • A parametric model assumes a form for the solution and fits it to the data (e.g., a linear regression model assumes y = b0 + b1x, then we fit it to the data to find b0 and b1).

  • A nonparametric model makes no assumption about the form of the solution; it is totally data-driven (see the small sketch after this list).
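
To make the second and third bullets concrete, here is a minimal sketch on made-up data (the toy function y = 3x + 1 is my own choice): a parametric fit that recovers b0 and b1, and a nonparametric k-NN predictor that never assumes a functional form.

```python
import numpy as np

# Toy data: y = 3x + 1 plus noise (made up purely for illustration)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3 * x + 1 + rng.normal(scale=0.5, size=50)

# Parametric: assume y = b0 + b1*x and estimate the two parameters.
b1, b0 = np.polyfit(x, y, deg=1)
print(f"fitted b0 = {b0:.2f}, b1 = {b1:.2f}")

# Nonparametric (k-nearest neighbors): no assumed form; just average
# the targets of the k training points closest to the query.
def knn_predict(x_new, k=5):
    idx = np.argsort(np.abs(x - x_new))[:k]
    return y[idx].mean()

print(knn_predict(4.0))  # should land near 3*4 + 1 = 13
```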


In Section 1.5, you write:

It is evident that random-access memory has not kept pace with the growth in data. At the same time, the increase in computational power has outpaced that of the data available. This means that statistical models need to become more memory efficient (this is typically achieved by adding nonlinearities) while simultaneously being able to spend more time on optimizing these parameters, due to an increased computational budget.

Can you kindly explain how statistical models can become more memory efficient by "adding non-linearities"? Is memory efficiency the reason for adding non-linearities, or a good side effect of adding non-linearity?

The following sentence is missing its citation.

The breakthrough deep Q-network that beat humans at Atari games using only the visual input .

It seems you intended to cite, but maybe it didn't compile properly?

The error rate is the probability that your model makes a mistake, and it is used to measure how good your model is on a classification problem. Here's a concrete example:

Suppose you trained a model to predict whether an image is a cat or a dog, and you take 100 images with known cat/dog labels that your model was never trained on. Take the first image (suppose it's a cat) and say your model predicts dog: the error is 1. Suppose the next image is a dog and your model predicts dog: the error is 0. You keep doing this, sum up the errors, and finally divide by the total number of examples you predicted on (100 in this case). That is the error rate. It's a number between 0 and 1. Suppose the error rate is 0.25, or 25%. It means there's a 25% chance your model will make a mistake if you were to randomly select an image from your test dataset.
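
In code the error rate is just the fraction of mismatches; a minimal sketch with made-up labels (1 = cat, 0 = dog):

```python
import numpy as np

# Hypothetical ground truth and model predictions for 10 held-out images
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])

# Each mismatch contributes an error of 1; averaging divides by the count.
error_rate = np.mean(y_true != y_pred)
print(error_rate)  # 0.2 -> wrong on 20% of these examples
```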

A few more things to know. For your model to make a decision, you had to pick a threshold. A neural network for a classification problem will produce the probability it thinks the image is a cat (say 30%). But this is just a probability! Next, you need to pick a threshold. So you might pick a threshold of, say, 50% and decide that if the model emits a probability of 50% or greater, you will call the image a cat. At this point you might wonder: is there a way to measure how good my model is without having to pick a threshold? In fact there is! That's what ROC AUC is all about, which you can read more about here: Understanding ROC AUC (part 1/2). Introduction | by matt johnson | Building Ibotta | Medium
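
Here is the thresholding step in code, plus the threshold-free ROC AUC via scikit-learn's roc_auc_score (the probabilities and labels are made up; assumes scikit-learn is installed):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical predicted probabilities of "cat" and the true labels (1 = cat)
probs  = np.array([0.30, 0.80, 0.45, 0.10, 0.95, 0.40])
y_true = np.array([0,    1,    1,    0,    1,    0])

# Turn probabilities into hard decisions with a 0.5 threshold.
y_pred = (probs >= 0.5).astype(int)
print("error rate at threshold 0.5:", np.mean(y_true != y_pred))

# ROC AUC scores the ranking of probabilities across all thresholds at once,
# so no single threshold has to be chosen.
print("ROC AUC:", roc_auc_score(y_true, probs))
```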


Data is the basis and computation runs on it, and elegant algorithms can help it run better.

This is such a brilliant explanation. Thanks!!

This is amazing! Thank you so much! I did the exercises and it's been very enjoyable.

Hey folks, just wondering if anyone's come across a resource for exercise solutions for each chapter in 'Dive into Deep Learning'? Any leads would be awesome!

Hi Dan, did you get any answers? If so, please share; I am also looking for resources.