Difference between revisions of "DeepLearning"

From Hawk Wiki
==Deep Learning==

===Numpy/Python tricks===
 
A trick when you want to flatten a matrix X of shape (a, b, c, d) to a matrix X_flatten of shape (b*c*d, a) is to use:

<pre class="brush:python">
X_flatten = X.reshape(X.shape[0], -1).T      # X.T is the transpose of X
</pre>
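As a quick sanity check of the trick above, here is a minimal sketch with made-up shapes (the dimensions and array contents are illustrative only):

<pre class="brush:python">
import numpy as np

# Illustrative example: 2 examples, each a 3x4x5 block, so a=2 and b*c*d=60
X = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

# Flatten each example into one column; -1 tells numpy to infer b*c*d
X_flatten = X.reshape(X.shape[0], -1).T

print(X.shape)          # (2, 3, 4, 5)
print(X_flatten.shape)  # (60, 2)
</pre>

Because numpy reshapes in row-major order, column i of X_flatten is exactly X[i].ravel(), i.e. one flattened example per column.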
===Common Steps For Data Pre-Processing===

Common steps for pre-processing a new dataset are:

*Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
*Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
*"Standardize" the data
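The three steps above can be sketched on a made-up image dataset. The shapes, the variable name train_set_x_orig, and dividing by 255 as the "standardization" are all assumptions for illustration, not a fixed recipe:

<pre class="brush:python">
import numpy as np

# Hypothetical dataset: 10 RGB images of 64x64 pixels (assumed shapes)
rng = np.random.default_rng(0)
train_set_x_orig = rng.integers(0, 256, size=(10, 64, 64, 3))

# Step 1: figure out the dimensions of the problem
m_train = train_set_x_orig.shape[0]   # number of training examples
num_px = train_set_x_orig.shape[1]    # image height/width in pixels

# Step 2: reshape so each example is a (num_px * num_px * 3, 1) column
train_set_x_flatten = train_set_x_orig.reshape(m_train, -1).T

# Step 3: "standardize" pixel values to the [0, 1] range
train_set_x = train_set_x_flatten / 255.0

print(train_set_x.shape)  # (12288, 10)
</pre>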

Revision as of 02:33, 8 January 2018
