tag:blogger.com,1999:blog-55899610572319682572024-03-03T02:04:18.264+05:30TECH LOGICAnonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.comBlogger36125tag:blogger.com,1999:blog-5589961057231968257.post-14651417964434450312018-02-20T04:58:00.003+05:302018-02-20T04:58:49.738+05:30How to compute probabilities from list of boosted trees in xgboost<div dir="ltr" style="text-align: left;" trbidi="on"><br />
If the number of trees == the number of boosting rounds (binary classification):<br />
score = sum of the predicted leaf values over all trees + 0.5 (the base_score)<br />
prob = exp(score)/(1+exp(score)) = 1/(1+exp(-score))<br />
<br />
If the number of trees == the number of boosting rounds * the number of classes (multi:softprob):<br />
score_i = sum of the predicted leaf values over the trees of class i + 0.5, for each class i<br />
prob_i = exp(score_i)/sum_j exp(score_j)<br />
<br />
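The two recipes above can be written as a minimal Python sketch. The function names are my own, and it assumes the per-tree leaf values for a sample have already been extracted, e.g. by parsing the xgb.dump output or via Booster.predict(pred_leaf=True) plus a lookup in the dumped trees:<br />

```python
import math

# Minimal sketch (not the xgboost API itself): given the leaf values a
# sample falls into, reproduce the predicted probabilities.

def binary_prob(leaf_values, base_score=0.5):
    """Binary case: sum the leaf values over all trees, add the base
    score, then squash with the logistic function."""
    score = sum(leaf_values) + base_score
    return 1.0 / (1.0 + math.exp(-score))

def multiclass_probs(leaf_values_per_class, base_score=0.5):
    """Multiclass (multi:softprob): one summed score per class,
    then a softmax over the class scores."""
    scores = [sum(vals) + base_score for vals in leaf_values_per_class]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

Note that in the multiclass case the constant 0.5 added to every class cancels inside the softmax.<br />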
References:<br />
<ul style="text-align: left;"><li><a href="https://stackoverflow.com/questions/39858916/xgboost-how-to-get-probabilities-of-class-from-xgb-dump-multisoftprob-objecti/40632862#40632862" style="font-family: HelveticaNeue; font-size: 12px;">https://stackoverflow.com/questions/39858916/xgboost-how-to-get-probabilities-of-class-from-xgb-dump-multisoftprob-objecti/40632862#40632862</a></li>
<li><a href="https://stackoverflow.com/questions/37193953/how-to-use-xgboost-r-tree-dump-to-compute-or-do-predictions?rq=1" style="font-family: HelveticaNeue; font-size: 12px;">https://stackoverflow.com/questions/37193953/how-to-use-xgboost-r-tree-dump-to-compute-or-do-predictions?rq=1</a></li>
<li><a href="https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf" style="font-family: HelveticaNeue; font-size: 12px;">https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf</a></li>
<li><a href="https://stackoverflow.com/questions/39858916/xgboost-how-to-get-probabilities-of-class-from-xgb-dump-multisoftprob-objecti" style="font-family: HelveticaNeue; font-size: 12px;">https://stackoverflow.com/questions/39858916/xgboost-how-to-get-probabilities-of-class-from-xgb-dump-multisoftprob-objecti</a></li>
<li><a href="https://stats.stackexchange.com/questions/245537/calculate-probabilities-from-xgb-dump-for-multisoftprob-objective-formula" style="font-family: HelveticaNeue; font-size: 12px;">https://stats.stackexchange.com/questions/245537/calculate-probabilities-from-xgb-dump-for-multisoftprob-objective-formula</a></li>
<li><a href="http://xgboost.readthedocs.io/en/latest/parameter.html" style="font-family: HelveticaNeue; font-size: 12px;">http://xgboost.readthedocs.io/en/latest/parameter.html</a></li>
<li><a href="https://www.youtube.com/watch?v=Vly8xGnNiWs" style="font-family: HelveticaNeue; font-size: 12px;">https://www.youtube.com/watch?v=Vly8xGnNiWs</a></li>
</ul><br />
<div><br />
</div></div>Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com16tag:blogger.com,1999:blog-5589961057231968257.post-9115545475150608612015-07-06T18:10:00.002+05:302015-07-06T18:10:33.592+05:30Extensive evaluation of different classifiers<div dir="ltr" style="text-align: left;" trbidi="on">
<br />TL;DR: Random Forests are the top-performing method common to both references.<br /><br />From the abstract of the first reference, "<a href="http://jmlr.org/papers/volume15/delgado14a/delgado14a.pdf" target="_blank">Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?</a>":<br />
<br />"We evaluate 179 classifiers arising from 17 families (discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest-neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regression splines and other methods). We use 121 data sets, which represent the whole UCI database (excluding the large-scale problems) and other own real problems, in order to achieve significant conclusions about the classifier behavior, not dependent on the data set collection. The classifiers most likely to be the bests are the random forest(RF) versions."<br /><br />From second reference:<br />"With excellent performance on all eight metrics, calibrated boosted trees were the best learning algorithm overall. Random forests are close second, followed by uncalibrated bagged trees, calibrated SVMs, and uncalibrated neural nets. The models that performed poorest were naive bayes, logistic regression, decision trees, and boosted stumps. Although some methods clearly perform better or worse than other methods on average, there is significant variability across the problems and metrics. Even the best models sometimes perform poorly, and models with poor average performance occasionally perform exceptionally well."<br /><br />
The two references perform an extensive evaluation of different classifiers across datasets and across performance metrics. <br />
<br />
<a href="http://jmlr.org/papers/volume15/delgado14a/delgado14a.pdf" target="_blank">Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?</a><br />
<a href="http://icml2006.autonlab.org/icml_documents/camera-ready/021_An_Empirical_Compari.pdf" target="_blank">An Empirical Comparison of Supervised Learning Algorithms</a></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-84368693792691040702015-06-24T12:06:00.001+05:302015-06-24T12:07:37.537+05:30Idea of modeling non-linearity as learning distribution over function spaces in feedforward neural network<div dir="ltr" style="text-align: left;" trbidi="on">
I was thinking about the relationship between graphical models and feedforward neural networks. On one hand, a feedforward neural network is a graph of deterministic functions; on the other hand, a graphical model is a graph of dependencies between uncertain random variables. Then I thought: what if the deterministic non-linearities were replaced with a random process that generates functions, so that a shared non-linearity could be inferred?<br />
<br />
An interesting idea would be to learn a distribution over function space (which would be used as the non-linearity in a feedforward neural network) jointly with backpropagation, in an EM-like fashion.<br />
<br />
To summarize, we want to replace the deterministic non-linearity with a learned function, by modeling a distribution over function space and inferring the shared non-linearity of the network.<br />
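For concreteness, here is a toy NumPy sketch of one entirely hypothetical instantiation of the idea: a shared non-linearity that is a learned softmax mixture of fixed basis functions, whose mixture weights would be trained jointly with the network by backpropagation (the distributional/EM part is not shown):<br />

```python
import numpy as np

# Hypothetical instantiation of the idea above: each layer's non-linearity
# is a learned softmax-weighted mixture of basis functions, and the mixture
# weights (self.logits) would be trained jointly with the ordinary weights.

def basis(z):
    """Candidate non-linearities evaluated at z, stacked on a new axis."""
    return np.stack([np.tanh(z), np.maximum(z, 0.0), 1.0 / (1.0 + np.exp(-z))])

class MixtureActivation:
    def __init__(self, n_basis=3):
        self.logits = np.zeros(n_basis)  # shared across the whole layer

    def __call__(self, z):
        w = np.exp(self.logits)
        w /= w.sum()                      # softmax mixture weights
        return np.tensordot(w, basis(z), axes=1)

# Example: with uniform logits the activation is just the mean of the bases.
act = MixtureActivation()
out = act(np.zeros((2, 3)))
```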
<br /></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com4tag:blogger.com,1999:blog-5589961057231968257.post-5543637359243417102015-06-07T05:50:00.001+05:302015-06-07T05:53:39.995+05:30Squeezing space with LaTeX<div dir="ltr" style="text-align: left;" trbidi="on">
I was trying to find ways to reduce the large vertical spaces between paragraphs. After searching a bit on the internet, I found the following commands along with some other options, so I wanted to share them here for my own future reference.<br />
Remove the spacing between paragraphs and have a small paragraph indentation<br />
<blockquote class="tr_bq">
<pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;">\setlength{<span style="font-weight: normal;">\parskip</span></span><span style="color: #e02020;"><span style="color: #cc0000;">}</span>{</span><span style="color: #2020c0; font-weight: normal;">0cm</span><span style="color: #e02020;">}</span>
<span style="color: #cc0000;">\setlength{<span style="font-weight: normal;"><span style="font-weight: normal;">\parindent</span></span></span><span style="color: #e02020;"><span style="color: #cc0000;">}</span>{</span><span style="color: #2020c0; font-weight: normal;">1em</span><span style="color: #e02020;">}</span></pre>
</blockquote>
Source:<br />
<a href="http://robjhyndman.com/hyndsight/squeezing-space-with-latex/">http://robjhyndman.com/hyndsight/squeezing-space-with-latex/</a><br />
<a href="http://www.terminally-incoherent.com/blog/2007/09/19/latex-squeezing-the-vertical-white-space/">http://www.terminally-incoherent.com/blog/2007/09/19/latex-squeezing-the-vertical-white-space/</a><br />
<a href="http://www-h.eng.cam.ac.uk/help/tpl/textprocessing/squeeze.html">http://www-h.eng.cam.ac.uk/help/tpl/textprocessing/squeeze.html</a><br />
<a href="https://ravirao.wordpress.com/2005/11/19/latex-tips-to-meet-publication-page-limits/">https://ravirao.wordpress.com/2005/11/19/latex-tips-to-meet-publication-page-limits/</a><br />
<br />
Make your text block as big as possible. The simplest way to do that is using the geometry package:<br />
<blockquote class="tr_bq">
<div class="wp_codebox">
<table><tbody>
<tr id="p6051"><td class="code" id="p605code1"><pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;">\usepackage</span><span style="color: #e02020;">[</span><span style="color: #c08020; font-weight: normal;">text={<span style="color: #2020c0; font-weight: normal;">16cm,24cm</span>}</span><span style="color: #e02020;">]{</span><span style="color: #2020c0; font-weight: normal;">geometry</span><span style="color: #e02020;">}</span></pre>
</td></tr>
</tbody></table>
</div>
</blockquote>
Use a compact font such as Times Roman:<br />
<blockquote class="tr_bq">
<div class="wp_codebox">
<table><tbody>
<tr id="p6052"><td class="code" id="p605code2"><pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;">\usepackage</span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;">mathptmx</span><span style="color: #e02020;">}</span></pre>
</td></tr>
</tbody></table>
</div>
</blockquote>
Remove space around section headings.<br />
<blockquote class="tr_bq">
<div class="wp_codebox">
<table><tbody>
<tr id="p6054"><td class="code" id="p605code4"><pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;">\usepackage[<span style="font-weight: normal;">compact</span></span><span style="color: #e02020;"><span style="color: #cc0000;">]</span>{</span><span style="color: #2020c0; font-weight: normal;">titlesec</span><span style="color: #e02020;">}</span>
<span style="color: #cc0000;"><span style="font-weight: normal;">\titlespacing</span>{<span style="font-weight: normal;">\section</span></span><span style="color: #e02020;"><span style="color: #cc0000;">}</span>{</span><span style="color: #2020c0; font-weight: normal;">0pt</span><span style="color: #e02020;">}{</span><span style="color: #2020c0; font-weight: normal;">2ex</span><span style="color: #e02020;">}{</span><span style="color: #2020c0; font-weight: normal;">1ex</span><span style="color: #e02020;">}</span>
<span style="color: #cc0000;"><span style="font-weight: normal;">\titlespacing</span>{<span style="font-weight: normal;">\subsection</span></span><span style="color: #e02020;"><span style="color: #cc0000;">}</span>{</span><span style="color: #2020c0; font-weight: normal;">0pt</span><span style="color: #e02020;">}{</span><span style="color: #2020c0; font-weight: normal;">1ex</span><span style="color: #e02020;">}{</span><span style="color: #2020c0; font-weight: normal;">0ex</span><span style="color: #e02020;">}</span>
<span style="color: #cc0000;"><span style="font-weight: normal;">\titlespacing</span>{<span style="font-weight: normal;">\subsubsection</span></span><span style="color: #e02020;"><span style="color: #cc0000;">}</span>{</span><span style="color: #2020c0; font-weight: normal;">0pt</span><span style="color: #e02020;">}{</span><span style="color: #2020c0; font-weight: normal;">0.5ex</span><span style="color: #e02020;">}{</span><span style="color: #2020c0; font-weight: normal;">0ex</span><span style="color: #e02020;">}</span></pre>
</td></tr>
</tbody></table>
</div>
</blockquote>
Beware of enumerated and itemized lists; replace them with compact lists instead.<br />
<blockquote class="tr_bq">
<div class="wp_codebox">
<table><tbody>
<tr id="p6055"><td class="code" id="p605code5"><pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;">\usepackage</span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;">paralist</span><span style="color: #e02020;">}</span></pre>
</td></tr>
</tbody></table>
</div>
</blockquote>
<blockquote class="tr_bq">
<div class="wp_codebox">
<table><tbody>
<tr id="p6056"><td class="code" id="p605code6"><pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;"><span style="font-weight: normal;">\begin</span></span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;"><span style="color: #0000d0; font-weight: normal;">compactitem</span></span><span style="color: #e02020;">}</span>
<span style="color: #cc0000;">\item</span> ...
<span style="color: #cc0000;"><span style="font-weight: normal;">\end</span></span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;"><span style="color: #0000d0; font-weight: normal;">compactitem</span></span><span style="color: #e02020;">}</span>
<span style="color: #cc0000;"><span style="font-weight: normal;">\begin</span></span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;"><span style="color: #0000d0; font-weight: normal;">compactenum</span></span><span style="color: #e02020;">}</span>
<span style="color: #cc0000;">\item</span> ...
<span style="color: #cc0000;"><span style="font-weight: normal;">\end</span></span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;"><span style="color: #0000d0; font-weight: normal;">compactenum</span></span><span style="color: #e02020;">}</span></pre>
</td></tr>
</tbody></table>
</div>
</blockquote>
If you are allowed, switching to double column can save heaps of space.<br />
<blockquote class="tr_bq">
<div class="wp_codebox">
<table><tbody>
<tr id="p6057"><td class="code" id="p605code7"><pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;">\usepackage</span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;">multicols</span><span style="color: #e02020;">}</span></pre>
</td></tr>
</tbody></table>
</div>
</blockquote>
<blockquote class="tr_bq">
<div class="wp_codebox">
<table><tbody>
<tr id="p6058"><td class="code" id="p605code8"><pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;"><span style="font-weight: normal;">\begin</span></span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;"><span style="color: #0000d0; font-weight: normal;">multicols</span></span><span style="color: #e02020;">}{</span><span style="color: #2020c0; font-weight: normal;">2</span><span style="color: #e02020;">}</span>
...
<span style="color: #cc0000;"><span style="font-weight: normal;">\end</span></span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;"><span style="color: #0000d0; font-weight: normal;">multicols</span></span><span style="color: #e02020;">}</span></pre>
</td></tr>
</tbody></table>
</div>
</blockquote>
If the rules say 12pt, you can usually get away with 11.5pt without anyone noticing:<br />
<blockquote class="tr_bq">
<div class="wp_codebox">
<table><tbody>
<tr id="p6059"><td class="code" id="p605code9"><pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;"><span style="font-weight: normal;">\begin</span>{<span style="font-weight: normal;"><span style="font-weight: normal;">document</span></span>}<span style="font-weight: normal;">\fontsize</span></span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;">11.5</span><span style="color: #e02020;">}{</span><span style="color: #2020c0; font-weight: normal;">14</span><span style="color: #e02020;">}</span><span style="color: maroon; font-weight: normal;">\rm</span></pre>
</td></tr>
</tbody></table>
</div>
</blockquote>
When you get desperate, you can squeeze the inter-line spacing using<br />
<blockquote class="tr_bq">
<div class="wp_codebox">
<table><tbody>
<tr id="p60510"><td class="code" id="p605code10"><pre class="latex" style="font-family: monospace;"><span style="color: #cc0000;"><span style="font-weight: normal;">\linespread</span></span><span style="color: #e02020;">{</span><span style="color: #2020c0; font-weight: normal;">0.9</span><span style="color: #e02020;">}</span></pre>
</td></tr>
</tbody></table>
</div>
</blockquote>
There is also a <code><a href="http://www.ctan.org/tex-archive/macros/latex/contrib/savetrees/">savetrees</a></code>
package which does a lot of squeezing, but the results don’t always
look nice, so it is better to try one or more of the above tricks
instead.<br />
<br /></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-49923924041584735432015-05-26T20:02:00.001+05:302015-06-09T12:29:32.026+05:30DL reading list for new students in LISA LAB<div dir="ltr" style="text-align: left;" trbidi="on">
<b></b><br />
<b></b><br />
https://docs.google.com/document/d/1IXF3h0RU5zz4ukmTrVKVotPQypChscNGf5k6E25HGvA/edit#heading=h.5r7p5dbrilt4<br />
<br />
http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/WebHome<br />
<br />
http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/ReadingOnDeepNetworks </div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-27698342295991078122015-03-31T15:08:00.000+05:302015-04-14T12:06:14.474+05:30Setting up Theano on Ubuntu 14.04<div dir="ltr" style="text-align: left;" trbidi="on">
After 2 days of non-stop struggle, I was able to make Theano work with my NVIDIA GeForce GTX 860M GPU.<br />
<br />
<b><span style="color: red;">TRY AT YOUR OWN RISK!!</span></b><br />
<br />
The settings which worked for me are as follows.<br />
DO NOT install <i>bumblebee</i>.<br />
<br />
Run the following command to see whether your NVIDIA GPU device is detected. If it is not, this blog won't help, sorry.<br />
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>lspci | grep -i NVIDIA</i></span></blockquote>
<br />
Install the following:<br />
Driver: NVIDIA 340.76<br />
CUDA: 6.5 toolkit<br />
<br />
<ul style="text-align: left;">
<li>Switch to the NVIDIA card (if you don't have this command, try to get it without changing the graphics drivers; conflicting drivers can be blacklisted, see the next point).</li>
</ul>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>prime-switch nvidia</i></span></blockquote>
</blockquote>
<ul style="text-align: left;">
<li>Blacklist other driver which can create conflicts:</li>
</ul>
<blockquote class="tr_bq">
Create a <span style="color: #e69138;"><i>/etc/modprobe.d/blacklist-file-drivers.conf</i></span> file with the blacklisted drivers. Use the command <span style="color: #e06666;"><i>ubuntu-drivers devices</i></span> to get a list of NVIDIA drivers:</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<div style="text-align: left;">
<span style="color: #45818e;"><i>blacklist nvidia-349<br />blacklist nvidia-346<br />blacklist xserver-xorg-video-nouveau </i></span></div>
</blockquote>
</blockquote>
<blockquote class="tr_bq">
To list all installed graphics drivers (useful while blacklisting drivers):</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>ubuntu-drivers devices</i></span></blockquote>
</blockquote>
<blockquote class="tr_bq">
Note: DO NOT <span style="color: #45818e;"><i>blacklist nvidia-340</i></span>.</blockquote>
<ul style="text-align: left;">
<li>Make sure the following commands work without error.</li>
</ul>
<blockquote class="tr_bq">
<blockquote class="tr_bq" style="text-align: left;">
<span style="color: #e06666;"><i>nvidia-modprobe<br />nvidia-settings</i></span><br />
<span style="color: #e06666;"><i>nvidia-smi</i></span></blockquote>
</blockquote>
<ul style="text-align: left;">
<li>Find the <span style="color: #e69138;"><i>/usr/local/cuda-6.5/samples/1_Utilities/deviceQuery</i></span> folder on your system.</li>
</ul>
<blockquote class="tr_bq">
Use the make command to create the executable:</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #e06666;">cd <i>/usr/local/cuda-6.5/samples/1_Utilities/deviceQuery/</i></span> <br />
<span style="color: #e06666;">sudo make </span> </blockquote>
</blockquote>
<blockquote class="tr_bq">
Run <span style="color: #e06666;"><i>/usr/local/cuda-6.5/samples/1_Utilities/deviceQuery/deviceQuery</i></span>.</blockquote>
<blockquote class="tr_bq">
You should get the following results: </blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #45818e;">deviceQuery,
CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version =
6.0, NumDevs = 1, Device0 = GeForce GTX 860M</span></blockquote>
</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #45818e;">Result = PASS</span></blockquote>
</blockquote>
<ul style="text-align: left;">
<li>Install Theano from Git (first install the dependencies from the Theano website):</li>
</ul>
<blockquote class="tr_bq">
<blockquote class="tr_bq" style="text-align: left;">
<span style="color: #e06666;"><i>sudo apt-get install python-numpy python-scipy python-dev python-pip python-nose g++ libopenblas-dev git</i></span></blockquote>
</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>sudo pip uninstall theano</i><i> </i></span><br />
<span style="color: #e06666;"><i>sudo pip install git+git://github.com/Theano/Theano.git</i></span></blockquote>
</blockquote>
<div>
<br />
According to <span style="color: #e69138;"><i>/usr/local/cuda-6.5/targets/x86_64-linux/include/host_config.h</i></span>, cuda-6.5 supports gcc-4.8 and g++-4.8, so one needs to install these and link them to gcc and g++ respectively.<br />
Example: <span style="color: #e06666;"><i>sudo ln -s /usr/bin/gcc-4.8 /usr/local/cuda/bin/gcc</i></span><br />
<ul style="text-align: left;">
<li>Create <span style="color: #e69138;">~<i>/.theanorc</i></span> File with following content:</li>
</ul>
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>[global]<br />floatX = float32<br />device = gpu<br /><br />[nvcc]<br />fastmath = True<br /><br />[cuda]<br />root=/usr/local/cuda-6.5/</i></span></blockquote>
<ul style="text-align: left;">
<li>Make a Python file to test the GPU, say <span style="color: #e69138;">test.py</span>:</li>
</ul>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #45818e;">from theano import function, config, shared, sandbox</span><br />
<span style="color: #45818e;">import theano.tensor as T</span><br />
<span style="color: #45818e;">import numpy</span><br />
<span style="color: #45818e;">import time</span><br />
<span style="color: #45818e;"><br /></span>
<span style="color: #45818e;">vlen = 10 * 30 * 768 # 10 x #cores x # threads per core</span><br />
<span style="color: #45818e;">iters = 1000</span><br />
<span style="color: #45818e;"><br /></span>
<span style="color: #45818e;">rng = numpy.random.RandomState(22)</span><br />
<span style="color: #45818e;">x = shared(numpy.asarray(rng.rand(vlen), config.floatX))</span><br />
<span style="color: #45818e;">f = function([], T.exp(x))</span><br />
<span style="color: #45818e;">print f.maker.fgraph.toposort()</span><br />
<span style="color: #45818e;">t0 = time.time()</span><br />
<span style="color: #45818e;">for i in xrange(iters):</span><br />
<span style="color: #45818e;"> r = f()</span><br />
<span style="color: #45818e;">t1 = time.time()</span><br />
<span style="color: #45818e;">print 'Looping %d times took' % iters, t1 - t0, 'seconds'</span><br />
<span style="color: #45818e;">print 'Result is', r</span><br />
<span style="color: #45818e;">if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):</span><br />
<span style="color: #45818e;"> print 'Used the cpu'</span><br />
<span style="color: #45818e;">else:</span><br />
<span style="color: #45818e;"> print 'Used the gpu'</span></blockquote>
</blockquote>
<br />
<blockquote class="tr_bq">
Run the above Python file:</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>sudo python test.py</i></span></blockquote>
</blockquote>
<blockquote class="tr_bq">
You might get an error:</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<i>Failed to compile <span style="color: #e69138;">cuda_ndarray.cu</span>: <span style="color: #e69138;">libcublas.so.6.5</span>: cannot open shared object file: No such file or directory</i></blockquote>
</blockquote>
<blockquote class="tr_bq">
So you need to locate and configure that path:</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>locate libcublas.so.6.5</i></span></blockquote>
</blockquote>
<blockquote class="tr_bq">
Add the following library path to <span style="color: #e69138;"><i>~/.bashrc</i></span> (Second path has <span style="color: #e69138;"><i>libcublas.so.6.5</i></span>)</blockquote>
<blockquote class="tr_bq" style="text-align: left;">
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>sudo ldconfig /usr/local/cuda-6.5/lib64/<br />sudo ldconfig /usr/local/cuda-6.5/targets/x86_64-linux/lib/</i></span></blockquote>
</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>export LD_LIBRARY_PATH=/usr/local/cuda-6.5/targets/x86_64-linux/lib:$LD_LIBRARY_PATH</i></span><br />
<span style="color: #e06666;"><br /></span>
<span style="color: #e06666;"><i>export PATH=/usr/local/cuda-6.5/bin:$PATH<br />export PATH=/usr/local/cuda-6.5/targets/x86_64-linux/lib:$PATH</i></span></blockquote>
</blockquote>
<blockquote class="tr_bq">
Now run the GPU code:<br />
<blockquote class="tr_bq">
<span style="color: #e06666;"><i>sudo python test.py</i></span></blockquote>
I get roughly a 10x speedup with the GPU.</blockquote>
I screwed up many times with different versions of the drivers; if you do the same, you might not get a login screen (black screen). Then use <span style="color: #e06666;">Ctrl + Alt + F1</span> to go to command mode.<br />
Remove <span style="color: #e69138;">xorg.conf</span>:<br />
<span style="color: #e06666;">sudo rm /etc/X11/xorg.conf</span><br />
<span style="color: #e06666;"><br /></span>
<span style="color: #e06666;">sudo service lightdm stop</span><br />
<span style="color: #e06666;">sudo service lightdm start</span><br />
<br />
You have probably got your login screen back. You might also be trapped in a loop, where the login screen asks for the password, accepts it, and then asks for the password again. I hope you don't hit this situation.<br />
(I added:<br />
<span style="color: #e06666;"><i>nvidia-modprobe </i></span><span style="color: #e06666;"><i> </i></span><br />
<span style="color: #e06666;"><i>prime-switch intel</i></span><br />
<i>to the end of <span style="color: #e69138;">~/.bashrc</span></i> because there were login-loop issues which got resolved by this. I am not sure why, but it works.)<br />
<br />
Although I have written this mainly for my own record, I hope it might help someone. :)<br />
<br />
<br />
<br />
<br /></div>
</div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com8tag:blogger.com,1999:blog-5589961057231968257.post-80452319356750282132015-02-14T21:25:00.004+05:302015-02-14T21:25:54.991+05:30On convex Neural Networks<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="https://hal.archives-ouvertes.fr/hal-01098505/document" target="_blank">Breaking the Curse of Dimensionality with Convex Neural Networks</a> - Francis Bach <a href="http://www.di.ens.fr/~fbach/fbach_cifar_2014.pdf" target="_blank">slides</a><br />
<br />
<a href="http://papers.nips.cc/paper/2800-convex-neural-networks.pdf" target="_blank">Convex Neural Networks</a> - Bengio et al<br />
<br />
<br /></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com4tag:blogger.com,1999:blog-5589961057231968257.post-62301713326198424292015-02-04T13:28:00.000+05:302015-02-04T13:28:10.707+05:30Scale of weight initialization<div dir="ltr" style="text-align: left;" trbidi="on">
I was listening to a Talking Machines interview (Ilya Sutskever), and he made an important point about the initialization scale for deep neural networks.<br />
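A quick numerical sketch of the scale issue (my own illustration, not from the interview): push a random vector through many random linear layers and watch how its norm behaves for different weight scales.<br />

```python
import numpy as np

# Hypothetical illustration: propagate a random signal through many linear
# layers and track its norm. Weights are drawn with standard deviation
# scale/sqrt(width), so scale ~ 1 roughly preserves the norm, scale < 1
# shrinks the signal, and scale > 1 blows it up.
rng = np.random.default_rng(0)

def norm_after(depth, scale, width=100):
    x = rng.normal(size=width)
    for _ in range(depth):
        W = rng.normal(scale=scale / np.sqrt(width), size=(width, width))
        x = W @ x
    return np.linalg.norm(x)

for scale in (0.5, 1.0, 2.0):
    print(scale, norm_after(depth=50, scale=scale))
```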
It seems weights that are too small will significantly decay the signal, while weights that are too large make it unstable. This also raises the important point of the stability of neural nets, and its connections to eigenvalue problems and random matrix theory, as pointed out by Ryan Adams.</div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-32089730551848371712015-01-31T10:56:00.000+05:302015-01-31T11:02:34.621+05:30Adaptive Learning Rates for NN<div dir="ltr" style="text-align: left;" trbidi="on">
I was going through Hinton's lectures and found something interesting that I wanted to share.<br />
It is very common for the magnitudes of the gradients to differ across layers. The fan-in of a unit determines the size of the &#8220;overshoot&#8221; effects caused by simultaneously changing many of the incoming weights of a unit to correct the same error. So we can use a local adaptive gain $g_{ij}$ for each gradient.<br />
<br />
So update rule becomes: <br />
\[\Delta w_{ij} = - \epsilon g_{ij} \frac{\partial E}{\partial w_{ij}}\]<br />
<br />
We adjust the gains by additive increase and multiplicative decrease:<br />
<blockquote class="tr_bq">
if $( \frac{\partial E}{\partial w_{ij}}(t-1) * \frac{\partial E}{\partial w_{ij}}(t) ) > 0 $</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
then $g_{ij}(t) = g_{ij}(t-1) + 0.05$</blockquote>
</blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
else $g_{ij}(t) = g_{ij}(t-1) * 0.95$</blockquote>
</blockquote>
Other things to note are:<br />
<ul style="text-align: left;">
<li>$g_{ij}$ should be within some bounds like [0.1,10] or [0.01,100]</li>
<li>Use full batch or large mini-batches (so nothing crazy happens because of sampling error)</li>
<li>Use the agreement in sign between the current gradient and the current velocity for that weight (adaptive learning rates combined with momentum).</li>
</ul>
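The gain rule above can be sketched in a few lines of NumPy (the function and parameter names are my own; the constants follow the post, an increment of 0.05, a decrement factor of 0.95, and gains bounded in [0.1, 10]):<br />

```python
import numpy as np

# Sketch of the additive-increase / multiplicative-decrease gain rule.

def update_gains(gains, grad, prev_grad, inc=0.05, dec=0.95, lo=0.1, hi=10.0):
    """Increase a gain additively when the gradient keeps its sign,
    otherwise decrease it multiplicatively, then clip to the bounds."""
    agree = grad * prev_grad > 0
    gains = np.where(agree, gains + inc, gains * dec)
    return np.clip(gains, lo, hi)

def step(w, grad, prev_grad, gains, eps=0.01):
    """One update: Delta w_ij = -eps * g_ij * dE/dw_ij."""
    gains = update_gains(gains, grad, prev_grad)
    return w - eps * gains * grad, gains
```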
Updates for the momentum method:<br />
The weight change is the current velocity:<br />
$$ \Delta w_{ij}(t) = v(t) = \alpha v(t-1) - \epsilon \frac{\partial E}{\partial w_{ij}}(t) = \alpha \Delta w_{ij}(t-1) - \epsilon \frac{\partial E}{\partial w_{ij}}(t)$$<br />
The velocity is $v(t) = \alpha v(t-1) - \epsilon \frac{\partial E}{\partial w_{ij}}(t)$, where $\alpha$ is slightly less than 1.<br />
The momentum method builds up speed in directions with a gentle but consistent gradient. Use a small initial momentum $\alpha = 0.5$ and increase it later to $\alpha = 0.9$.</div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com3tag:blogger.com,1999:blog-5589961057231968257.post-42144862838531006332014-12-25T09:45:00.004+05:302014-12-25T09:49:45.171+05:30Better Matlab User<div dir="ltr" style="text-align: left;" trbidi="on">
Yet Another Guide TO Matlab(YAGTOM)<br />
<a href="http://ubcmatlabguide.github.io/">http://ubcmatlabguide.github.io/</a><br />
<br />
Writing Fast Matlab Codes<br />
<a href="http://optlab.mcmaster.ca/~yzinchen/fast-matlab-code.pdf">http://optlab.mcmaster.ca/~yzinchen/fast-matlab-code.pdf</a><br />
<br />
Some Guidelines<br />
<a href="http://mlg.eng.cam.ac.uk/creed/Spring2011/pdf/matlab-guidelines.pdf">http://mlg.eng.cam.ac.uk/creed/Spring2011/pdf/matlab-guidelines.pdf</a><br />
<br />
Simple Mex tutorial<br />
<a href="http://www.shawnlankton.com/2008/03/getting-started-with-mex-a-short-tutorial/">http://www.shawnlankton.com/2008/03/getting-started-with-mex-a-short-tutorial/</a> </div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-40508558905841957502014-12-17T07:33:00.000+05:302014-12-17T07:33:42.468+05:30RNN Libraries<div dir="ltr" style="text-align: left;" trbidi="on">
GroundHog<br />
<a href="https://github.com/pascanur/GroundHog">https://github.com/pascanur/GroundHog</a><br />
<br />
An extension to Torch7's nn package<br />
<a href="https://github.com/clementfarabet/lua---nnx#nnx.Recurrent">https://github.com/clementfarabet/lua---nnx#nnx.Recurrent</a><br />
<br />
A MIDI learning machine.
<br />
<a href="https://github.com/blr246/midi-machine">https://github.com/blr246/midi-machine</a><br />
<br />
RNNLIB<br />
<a href="http://sourceforge.net/projects/rnnl/">http://sourceforge.net/projects/rnnl/</a> <br />
<br />
CURRENNT<br />
<a href="http://sourceforge.net/projects/currennt/">http://sourceforge.net/projects/currennt/</a><br />
<br />
BLOCKS<br />
<a href="http://blocks.readthedocs.org/en/latest/">http://blocks.readthedocs.org/en/latest/</a><br />
<br />
<br /></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-82821907929632976112014-07-29T19:36:00.000+05:302014-07-29T19:36:23.976+05:30MLSS 2014 Pittsburgh Video Lectures <div dir="ltr" style="text-align: left;" trbidi="on">
<a href="http://mlss2014.com/schedule.html" target="_blank"><u><b>Schedule and Content</b></u></a><br />
<ul style="text-align: left;">
<li>Introduction to Machine Learning</li>
<li>Collaborative Filtering</li>
<li>Recommender Systems</li>
<li>Spectral and Tensor Methods</li>
<li>Deep Learning</li>
<li>Parameter Server </li>
<li>NELL</li>
<li>Data Stream Analytics</li>
<li>Scaling Machine Learning</li>
<li>Resource-aware distributed Machine Learning </li>
</ul>
<br />
<a href="https://www.youtube.com/playlist?list=PLZSO_6-bSqHQCIYxE3ycGLXHMjK3XV7Iz" target="_blank"><u><b>VideoLectures on Youtube</b></u></a></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-60977680767374055822014-02-17T17:23:00.000+05:302015-03-22T12:05:05.530+05:30Legendre Transform, conjugate functions and Quasi-(Concavity/Convexity)<div dir="ltr" style="text-align: left;" trbidi="on">
These are some helpful links to better understand conjugate functions:<br />
<br />
http://maze5.net/?page_id=733<br />
https://en.wikipedia.org/wiki/Legendre_transformation<br />
http://www.onmyphd.com/?p=legendre.fenchel.transform<br />
http://physics.stackexchange.com/questions/4384/physical-meaning-of-legendre-transformation<br />
http://jmanton.wordpress.com/2010/11/21/introduction-to-the-legendre-transform/<br />
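For quick reference, the convex (Legendre–Fenchel) conjugate of a function $f$ is<br />
$$ f^*(y) = \sup_{x} \left( \langle y, x \rangle - f(x) \right) $$<br />
Being a pointwise supremum of affine functions of $y$, $f^*$ is always convex, and $f^{**} = f$ exactly when $f$ is convex and lower semicontinuous; for smooth, strictly convex $f$ this reduces to the classical Legendre transform with $y = \nabla f(x)$.<br />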
<br />
<u><b>Quasiconcavity and Quasiconvexity </b></u><br />
<a href="http://www.economics.utoronto.ca/osborne/MathTutorial/DFI.HTM#s:levelcurves">http://www.economics.utoronto.ca/osborne/MathTutorial/DFI.HTM#s:levelcurves</a><br />
<br /></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-77811727982517197092014-01-03T19:30:00.002+05:302014-01-03T19:33:06.236+05:30Erik Sudderth's Courses<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<u><b>Introduction to Machine Learning</b></u><br />
<a href="http://cs.brown.edu/courses/csci1950-f/spring2012/calendar.html">http://cs.brown.edu/courses/csci1950-f/spring2012/calendar.html</a> <br />
<a href="http://cs.brown.edu/courses/csci1420/calendar.html">http://cs.brown.edu/courses/csci1420/calendar.html</a><br />
<br />
<u><b>Probabilistic Graphical Models</b></u><br />
<a href="http://cs.brown.edu/courses/csci2950-p/lectures.html">http://cs.brown.edu/courses/csci2950-p/lectures.html</a><br />
<br />
<u><b>Learning & Inference in Probabilistic Graphical Models</b></u><br />
<a href="http://cs.brown.edu/courses/csci2950-p/spring2010/lectures.html">http://cs.brown.edu/courses/csci2950-p/spring2010/lectures.html</a><br />
<br />
<u><b>Applied Bayesian Nonparametrics</b></u> <br />
<a href="http://cs.brown.edu/courses/csci2950-p/fall2011/lectures.html">http://cs.brown.edu/courses/csci2950-p/fall2011/lectures.html</a></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-63034535387890455012014-01-03T19:20:00.004+05:302014-01-03T19:33:58.789+05:30David Blei's trio<div dir="ltr" style="text-align: left;" trbidi="on">
<u><b>COS513: Foundations of Probabilistic Modeling</b></u><br />
<a href="http://www.cs.princeton.edu/courses/archive/fall10/cos513/">http://www.cs.princeton.edu/courses/archive/fall10/cos513/</a><br />
<br />
<div style="text-align: left;">
<u><b>COS597C: Advanced Methods in Probabilistic Modeling </b></u></div>
<a href="http://www.cs.princeton.edu/courses/archive/fall11/cos597C/">http://www.cs.princeton.edu/courses/archive/fall11/cos597C/</a><br />
<br />
<u><b>COS597C: Bayesian Nonparametrics </b></u><br />
<a href="http://www.cs.princeton.edu/courses/archive/fall07/cos597C/syllabus.html">http://www.cs.princeton.edu/courses/archive/fall07/cos597C/syllabus.html</a><br />
<br /></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-43602588808787912312013-10-17T10:58:00.000+05:302013-10-17T10:58:02.803+05:30Visual Recognition and Search Topics in Information Processing<div dir="ltr" style="text-align: left;" trbidi="on">
http://rogerioferis.com/VisualRecognitionAndSearch/Resources.html</div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-84301905873759874832013-10-11T19:06:00.001+05:302013-11-22T11:23:12.752+05:30Selected links for probabilistic models<div dir="ltr" style="text-align: left;" trbidi="on">
NPBayes resources by Peter Orbanz <br />
http://stat.columbia.edu/~porbanz/talks/npb-tutorial.html <br />
<br />
Glossary by Tom Minka <br />
http://alumni.media.mit.edu/~tpminka/statlearn/glossary/<br />
<br />
PGM course by Kevin Murphy<br />
http://www.cs.ubc.ca/~murphyk/Teaching/Stat521A-Spring09/index.html<br />
<br />
Probabilistic graphical models - advanced methods by Murphy<br />
https://sites.google.com/site/cs228tspring2012/<br />
<br />
STA561: Probabilistic Machine Learning: Fall 2013<br />
http://genome.duke.edu/labs/engelhardt/courses/sta561.html <br />
<br />
<br /></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-28247080283942257182013-09-27T17:20:00.001+05:302013-11-22T08:08:57.230+05:30Bayesian non-parametrics idea<div dir="ltr" style="text-align: left;" trbidi="on">
I don't know why, but this tutorial is really approachable and makes the idea clear.<br />
An awesome high-level overview of Bayesian non-parametrics by Zoubin Ghahramani:<br />
<a href="http://mlg.eng.cam.ac.uk/pub/pdf/Gha12.pdf">http://mlg.eng.cam.ac.uk/pub/pdf/Gha12.pdf</a><br />
<a href="https://www.ee.washington.edu/techsite/papers/documents/UWEETR-2010-0006.pdf">https://www.ee.washington.edu/techsite/papers/documents/UWEETR-2010-0006.pdf</a><br />
<br />
CVPR 2012 tutorial on applied Bayesian non-parametrics<br />
http://cs.brown.edu/~sudderth/bnpCVPR12/materials.html<br />
http://cs.brown.edu/~sudderth/bnpCVPR12/resources.html <br />
http://cs.brown.edu/courses/csci2950-p/lectures.html<br />
http://www.cs.princeton.edu/courses/archive/fall07/cos597C/syllabus.html <br />
<br />
Topics in Machine Learning: Bayesian Methods for Machine Learning <br />
<a href="http://www.cs.utoronto.ca/%7Eradford/csc2541.S11/">http://www.cs.utoronto.ca/~radford/csc2541.S11/</a> <br />
<br />
<a href="http://www.cs.berkeley.edu/~jordan/courses/294-fall09/lectures/nonparametric/">http://www.cs.berkeley.edu/~jordan/courses/294-fall09/lectures/nonparametric/</a><br />
<br />
Best place to learn about DPs: chapter 2 of Sudderth's thesis <br />
<a href="http://cs.brown.edu/~sudderth/papers/sudderthPhD.pdf">http://cs.brown.edu/~sudderth/papers/sudderthPhD.pdf</a><br />
<br />
<a href="http://www.cs.toronto.edu/~radford/mixmc.abstract.html">http://www.cs.toronto.edu/~radford/mixmc.abstract.html</a><br />
<a href="http://www.cs.toronto.edu/~radford/ftp/bmm.pdf">http://www.cs.toronto.edu/~radford/ftp/bmm.pdf</a><br />
<a href="http://www.cs.utoronto.ca/~radford/ftp/review.pdf">http://www.cs.utoronto.ca/~radford/ftp/review.pdf</a><br />
<a href="http://www.cs.toronto.edu/~radford/ftp/mixmc.pdf">http://www.cs.toronto.edu/~radford/ftp/mixmc.pdf</a><br />
<a href="http://www.cs.toronto.edu/~radford/ftp/mixsplit.pdf">http://www.cs.toronto.edu/~radford/ftp/mixsplit.pdf</a> <br />
<a href="http://www.cs.berkeley.edu/~jordan/papers/hdp.pdf">http://www.cs.berkeley.edu/~jordan/papers/hdp.pdf</a><br />
<a href="http://books.nips.cc/papers/files/nips18/NIPS2005_0130.pdf">http://books.nips.cc/papers/files/nips18/NIPS2005_0130.pdf</a><br />
<a href="http://books.nips.cc/papers/files/nips19/NIPS2006_0716.pdf">http://books.nips.cc/papers/files/nips19/NIPS2006_0716.pdf</a><br />
<br />
<a href="http://npbayes.wikidot.com/references">http://npbayes.wikidot.com/references</a> </div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-42337854813904825852013-09-21T05:03:00.003+05:302013-09-25T10:36:01.946+05:30Awesome Video Lectures (I liked) @ videolectures.net<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="color: #3d85c6;">Approximate Inference</span> (Expectation Propagation) by Tom Minka<br />
<a href="http://videolectures.net/mlss09uk_minka_ai/#c3275">http://videolectures.net/mlss09uk_minka_ai/#c3275</a><br />
<br />
<span style="color: #3d85c6;">Topic Models</span> by David Blei<br />
<a href="http://videolectures.net/mlss09uk_blei_tm/">http://videolectures.net/mlss09uk_blei_tm/</a><br />
<br />
<span style="color: #3d85c6;">Graphical Models and message-passing algorithms <span style="color: #f3f3f3;">by Wainwright</span></span><br />
<a href="http://videolectures.net/mlss2011_wainwright_messagepassing/">http://videolectures.net/mlss2011_wainwright_messagepassing/</a> <br />
<br />
MCMC by Iain Murray<br />
<a href="http://videolectures.net/mlss09uk_murray_mcmc/">http://videolectures.net/mlss09uk_murray_mcmc/</a><br />
<br />
Bayesian Inference and Gaussian Processes by Carl Rasmussen<br />
<a href="http://videolectures.net/mlss07_rasmussen_bigp/">http://videolectures.net/mlss07_rasmussen_bigp/</a><br />
<br />
<span style="color: #3d85c6;">Structured Prediction: A Large Margin Approach</span> by Ben Taskar<br />
<a href="http://videolectures.net/aop07_taskar_pgm/">http://videolectures.net/aop07_taskar_pgm/</a><br />
<br />
<span style="color: #3d85c6;">Algorithms for Predicting Structured Data (OK for beginners, intuitive)</span><br />
http://videolectures.net/ecmlpkdd2010_gartner_vembu_apsd/ <br />
<br />
Nonparametric Bayesian methods<br />
<a href="http://videolectures.net/mlss2011_teh_nonparametrics/">http://videolectures.net/mlss2011_teh_nonparametrics/</a><br />
<a href="http://videolectures.net/mlss09uk_teh_nbm/">http://videolectures.net/mlss09uk_teh_nbm/</a><br />
<a href="http://videolectures.net/mlss09uk_orbanz_fnbm/">http://videolectures.net/mlss09uk_orbanz_fnbm/</a><br />
<a href="http://videolectures.net/mlss06au_tresp_dpnbm/">http://videolectures.net/mlss06au_tresp_dpnbm/</a> <br />
<br />
Dirichlet Processes: Tutorial and Practical Course<br />
<a href="http://videolectures.net/mlss2012_gorur_dirichlet_practical/">http://videolectures.net/mlss2012_gorur_dirichlet_practical/</a><br />
<a href="http://videolectures.net/mlss07_teh_dp/">http://videolectures.net/mlss07_teh_dp/</a><br />
<br />
<span style="color: #3d85c6;">Dimensionality Reduction</span> by Neil Lawrence<br />
<a href="http://videolectures.net/mlss2012_lawrence_dimensionality_reduction/">http://videolectures.net/mlss2012_lawrence_dimensionality_reduction/</a><br />
<br />
<span style="color: #3d85c6;">Support Vector Machines</span><br />
<a href="http://videolectures.net/mlss06tw_lin_svm/">http://videolectures.net/mlss06tw_lin_svm/</a><br />
<br />
<span style="color: #3d85c6;">Low Rank Modelling</span> by Emmanuel Candes<br />
<a href="http://videolectures.net/mlss2011_candes_lowrank/">http://videolectures.net/mlss2011_candes_lowrank/</a><br />
<br />
Geometric Methods and Manifold Learning by Niyogi<br />
<a href="http://videolectures.net/mlss09us_niyogi_belkin_gmml/">http://videolectures.net/mlss09us_niyogi_belkin_gmml/</a><br />
<br />
<span style="color: #3d85c6;">MAP Inference in Discrete Models</span> (max-flow / s-t cut / submodularity)<br />
http://videolectures.net/bmvc2012_kohli_discrete_models/ </div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com2tag:blogger.com,1999:blog-5589961057231968257.post-44835505424149511542013-09-10T23:18:00.001+05:302013-09-10T23:30:33.807+05:30Latent Dirichilet Allocation and variational method<div dir="ltr" style="text-align: left;" trbidi="on">
I made a presentation for a study group on Latent Dirichlet Allocation, covering its inference with the mean-field method.<br />
I thought I would share it in case it is of help to someone. It is assumed you know <a href="http://en.wikipedia.org/wiki/Bayesian_inference#Formal_description_of_Bayesian_inference" target="_blank">basic inference</a>, basic probability and standard distributions, the <a href="http://cs229.stanford.edu/notes/cs229-notes8.pdf" target="_blank">EM algorithm</a>, and basic sampling.<br />
<a href="https://www.dropbox.com/s/qsh8seuy527z30s/variational.pdf">https://www.dropbox.com/s/qsh8seuy527z30s/variational.pdf </a><br />
<br />
If you think some credits are missing, mail me. </div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-85822141216902078812013-08-29T21:47:00.001+05:302013-08-29T21:47:09.541+05:30Bayesian Critique of Statistics in Health: The Great Health Hoax<div dir="ltr" style="text-align: left;" trbidi="on">
Yet more support for the Bayesians:<br />
http://www2.isye.gatech.edu/~brani/isyebayes/bank/pvalue.pdf</div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-48465351631793739772013-08-12T22:43:00.004+05:302013-08-12T22:43:54.247+05:30Path to becoming Data Scientist - Metro Map<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfAHdBpUyO7XwgUZSeUZmItjdn2mw8sWQjeSWdYtm4rzrV29iEFBmm1VM43jmdqGbKsEjwlgxb059k8_N-XWMsJCsm9UFNFhi9Fq086lLjHmxS0JNs3yGU7__aTiFsVCS-Eu8yDdCirPQ/s1600/data_scientist_path.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="516" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfAHdBpUyO7XwgUZSeUZmItjdn2mw8sWQjeSWdYtm4rzrV29iEFBmm1VM43jmdqGbKsEjwlgxb059k8_N-XWMsJCsm9UFNFhi9Fq086lLjHmxS0JNs3yGU7__aTiFsVCS-Eu8yDdCirPQ/s640/data_scientist_path.png" width="640" /></a><br />
Source: http://nirvacana.com/thoughts/becoming-a-data-scientist/</div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com1tag:blogger.com,1999:blog-5589961057231968257.post-67413416870249682152013-07-28T09:51:00.002+05:302013-07-28T09:51:13.124+05:30Latex symbol classifier<div dir="ltr" style="text-align: left;" trbidi="on">
I saw this interesting site which classifies hand-drawn LaTeX symbols and gives their commands.<br />
<a href="http://detexify.kirelabs.org/classify.html">http://detexify.kirelabs.org/classify.html</a></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-84798713876461489632013-04-28T10:26:00.003+05:302013-05-01T10:50:58.010+05:30From Mike Jordan on what people should learn for ML.<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="color: #cccccc;"><br /></span>
<span style="color: #cccccc;">From a post on <span class="comment">news.ycombinator.com</span></span><br />
<span style="color: #cccccc;">I personally think that everyone in machine learning should be
(completely) familiar with essentially all of the material in the
following intermediate-level statistics book:</span><br />
<span style="color: #cccccc;">1.)
Casella, G. and Berger, R.L. (2001).
"Statistical Inference"
Duxbury Press.</span><br />
<span style="color: #cccccc;">For a slightly more advanced book that's quite clear on mathematical
techniques, the following book is quite good:</span><br />
<span style="color: #cccccc;">2.)
Ferguson, T. (1996).
"A Course in Large Sample Theory"
Chapman & Hall/CRC.</span><br />
<span style="color: #cccccc;">You'll need to learn something about asymptotics at some point, and
a good starting place is:</span><br />
<span style="color: #cccccc;">3.)
Lehmann, E. (2004).
"Elements of Large-Sample Theory"
Springer.</span><br />
<span style="color: #cccccc;">Those are all frequentist books. You should also read something
Bayesian:</span><br />
<span style="color: #cccccc;">4.)
Gelman, A. et al. (2003).
"Bayesian Data Analysis"
Chapman & Hall/CRC.</span><br />
<span style="color: #cccccc;">and you should start to read about Bayesian computation:</span><br />
<span style="color: #cccccc;">5.)
Robert, C. and Casella, G. (2005).
"Monte Carlo Statistical Methods"
Springer.</span><br />
<span style="color: #cccccc;">On the probability front, a good intermediate text is:</span><br />
<span style="color: #cccccc;">6.)
Grimmett, G. and Stirzaker, D. (2001).
"Probability and Random Processes"
Oxford.</span><br />
<span style="color: #cccccc;">At a more advanced level, a very good text is the following:</span><br />
<span style="color: #cccccc;">7.)
Pollard, D. (2001).
"A User's Guide to Measure Theoretic Probability"
Cambridge.</span><br />
<span style="color: #cccccc;">The standard advanced textbook is
Durrett, R. (2005).
"Probability: Theory and Examples"
Duxbury.</span><br />
<span style="color: #cccccc;">Machine learning research also reposes on optimization theory.
A good starting book on linear optimization that will prepare
you for convex optimization:</span><br />
<span style="color: #cccccc;">8.)
Bertsimas, D. and Tsitsiklis, J. (1997).
"Introduction to Linear Optimization"
Athena.</span><br />
<span style="color: #cccccc;">And then you can graduate to:</span><br />
<span style="color: #cccccc;">9.)
Boyd, S. and Vandenberghe, L. (2004).
"Convex Optimization"
Cambridge.</span><br />
<span style="color: #cccccc;">Getting a full understanding of algorithmic linear algebra is
also important. At some point you should feel familiar with
most of the material in</span><br />
<span style="color: #cccccc;">10.)
Golub, G., and Van Loan, C. (1996).
"Matrix Computations"
Johns Hopkins.</span><br />
<span style="color: #cccccc;">It's good to know some information theory. The classic is:</span><br />
<span style="color: #cccccc;">11.)
Cover, T. and Thomas, J.
"Elements of Information Theory"
Wiley.</span><br />
<span style="color: #cccccc;">Finally, if you want to start to learn some more abstract math,
you might want to start to learn some functional analysis (if you
haven't already). Functional analysis is essentially linear algebra
in infinite dimensions, and it's necessary for kernel methods, for
nonparametric Bayesian methods, and for various other topics.
Here's a book that I find very readable:</span><br />
<span style="color: #cccccc;">12.)
Kreyszig, E. (1989).
"Introductory Functional Analysis with Applications"
Wiley.</span><br />
<span style="color: #cccccc;"><br /></span>
<span style="color: #cccccc;"><br /></span>
<span style="color: #cccccc;"><span class="comment">Source: <a href="https://news.ycombinator.com/item?id=1055389">https://news.ycombinator.com/item?id=1055389</a></span></span><br />
<br />
<a href="http://homepages.inf.ed.ac.uk/sgwater/reading_list.html"><span style="color: #cccccc;"><span class="comment">http://homepages.inf.ed.ac.uk/sgwater/reading_list.html</span></span></a><br />
<span style="color: #cccccc;"><span class="comment"><a href="http://cocosci.berkeley.edu/tom/bayes.html#general">http://cocosci.berkeley.edu/tom/bayes.html#general</a> </span></span><br />
<br />
<span style="color: #cccccc;"><span class="comment">Video lectures on functional analysis:</span></span><br />
<a href="http://www.youtube.com/watch?v=ebesx6pF8mg&list=PLBC73B96341ECF455"><span style="color: #cccccc;"><span class="comment">http://www.youtube.com/watch?v=ebesx6pF8mg&list=PLBC73B96341ECF455</span></span></a><br />
<span style="color: #cccccc;"><span class="comment"> <a href="http://www.youtube.com/watch?v=7IIw_U8rv4Q&list=PL2B92DCEAB0A249CD">http://www.youtube.com/watch?v=7IIw_U8rv4Q&list=PL2B92DCEAB0A249CD</a></span></span></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com0tag:blogger.com,1999:blog-5589961057231968257.post-65251595574072879932013-04-22T17:54:00.007+05:302013-05-01T11:07:40.625+05:30Probabilistic programming<div dir="ltr" style="text-align: left;" trbidi="on">
<u>Some nice posts about probabilistic programming:</u><br />
http://radar.oreilly.com/2013/04/probabilistic-programming.html<br />
http://tm.durusau.net/?cat=413 <br />
<br />
http://zinkov.com/posts/2012-06-27-why-prob-programming-matters/<br />
<br />
<u>Workshops and tutorials</u><br />
<a href="http://projects.csail.mit.edu/church/wiki/Probabilistic_Models_of_Cognition">http://projects.csail.mit.edu/church/wiki/Probabilistic_Models_of_Cognition</a><br />
<a href="http://projects.csail.mit.edu/church/wiki/Church">http://projects.csail.mit.edu/church/wiki/Church</a> <br />
<a href="http://probabilistic-programming.org/wiki/NIPS*2012_Workshop">http://probabilistic-programming.org/wiki/NIPS*2012_Workshop</a><br />
<a href="http://nbviewer.ipython.org/urls/raw.github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/master/Chapter1_Introduction/Chapter1_Introduction.ipynb">http://nbviewer.ipython.org/urls/raw.github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/master/Chapter1_Introduction/Chapter1_Introduction.ipynb</a><br />
<br />
<div style="text-align: left;">
<u>Probabilistic Programming and Bayesian Methods for Hackers <i>Using Python and PyMC</i></u>
</div>
<a href="https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers">https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers</a><br />
<u>Collection of IPython notebooks</u> <br />
https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks<br />
<br /></div>
Anonymoushttp://www.blogger.com/profile/00635176017294278878noreply@blogger.com12