<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>GW labs</title><link href="http://gw.tnode.com/" rel="alternate"></link><link href="http://gw.tnode.com/feeds/all.atom.xml" rel="self"></link><id>http://gw.tnode.com/</id><updated>2017-02-07T00:00:00+01:00</updated><entry><title>Docker Machine with USB support on Windows/macOS</title><link href="http://gw.tnode.com/docker/docker-machine-with-usb-support-on-windows-macos/" rel="alternate"></link><updated>2017-02-07T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2017-02-01:docker/docker-machine-with-usb-support-on-windows-macos/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker 1.x logo" height="120" src="http://gw.tnode.com/docker/img/docker-1x-logo.png" width="354"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h2 id="docker-machine-with-usb-support-on-windowsmacos"&gt;Docker Machine with USB support on Windows/macOS&lt;/h2&gt;
&lt;p&gt;The only way to make USB devices available to &lt;em&gt;Docker&lt;/em&gt; containers under &lt;em&gt;Windows&lt;/em&gt; or &lt;em&gt;macOS&lt;/em&gt; is to use &lt;a href="https://github.com/docker/machine"&gt;&lt;em&gt;Docker Machine&lt;/em&gt;&lt;/a&gt;, which ships with the usual installer and runs the Docker daemon inside a &lt;em&gt;VirtualBox&lt;/em&gt; VM. None of this is needed on a &lt;em&gt;Linux&lt;/em&gt; host OS with x86_64 architecture, where devices can be passed to containers directly.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Download and install &lt;em&gt;Docker for Windows&lt;/em&gt; or &lt;em&gt;Docker for Mac&lt;/em&gt; (&lt;em&gt;Docker Machine&lt;/em&gt;) from &lt;a href="https://www.docker.com/products/docker"&gt;https://www.docker.com/products/docker&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download and install &lt;em&gt;VirtualBox 5.1+&lt;/em&gt; from &lt;a href="https://www.virtualbox.org/wiki/Downloads"&gt;https://www.virtualbox.org/wiki/Downloads&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optionally, also download and install the &lt;em&gt;VirtualBox Extension Pack&lt;/em&gt;; it adds USB 2.0/3.0 support, which makes transfers (e.g. flashing devices) faster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initialize the &lt;em&gt;Docker Machine&lt;/em&gt; as a virtual machine (VM) called &lt;code&gt;default&lt;/code&gt; from the console:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker-machine&lt;/span&gt; create --driver virtualbox default
&lt;span class="kw"&gt;Running&lt;/span&gt; pre-create checks...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;No&lt;/span&gt; default Boot2Docker ISO found locally, downloading the latest release...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Latest&lt;/span&gt; release for github.com/boot2docker/boot2docker is v1.13.0
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Downloading&lt;/span&gt; /Users/ami/.docker/machine/cache/boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v1.13.0/boot2docker.iso...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%&lt;/span&gt;
&lt;span class="kw"&gt;Creating&lt;/span&gt; machine...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Copying&lt;/span&gt; /Users/ami/.docker/machine/cache/boot2docker.iso to /Users/ami/.docker/machine/machines/default/boot2docker.iso...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Creating&lt;/span&gt; VirtualBox VM...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Creating&lt;/span&gt; SSH key...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Starting&lt;/span&gt; the VM...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Check&lt;/span&gt; network to re-create if needed...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Waiting&lt;/span&gt; for an IP...
&lt;span class="kw"&gt;Waiting&lt;/span&gt; for machine to be running, this may take a few minutes...
&lt;span class="kw"&gt;Detecting&lt;/span&gt; operating system of created instance...
&lt;span class="kw"&gt;Waiting&lt;/span&gt; for SSH to be available...
&lt;span class="kw"&gt;Detecting&lt;/span&gt; the provisioner...
&lt;span class="kw"&gt;Provisioning&lt;/span&gt; with boot2docker...
&lt;span class="kw"&gt;Copying&lt;/span&gt; certs to the local machine directory...
&lt;span class="kw"&gt;Copying&lt;/span&gt; certs to the remote machine...
&lt;span class="kw"&gt;Setting&lt;/span&gt; Docker configuration on the remote daemon...
&lt;span class="kw"&gt;Checking&lt;/span&gt; connection to Docker...
&lt;span class="kw"&gt;Docker&lt;/span&gt; is up and running!
&lt;span class="kw"&gt;To&lt;/span&gt; see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env default
$ &lt;span class="kw"&gt;docker-machine&lt;/span&gt; ls
&lt;span class="kw"&gt;NAME&lt;/span&gt;      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
&lt;span class="kw"&gt;default&lt;/span&gt;   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.0&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enable USB support in VirtualBox for your VM &lt;code&gt;default&lt;/code&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you installed the &lt;em&gt;VirtualBox Extension Pack&lt;/em&gt;, use the &lt;code&gt;--usbxhci on&lt;/code&gt; option instead of &lt;code&gt;--usb on&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker-machine&lt;/span&gt; stop
&lt;span class="kw"&gt;Stopping&lt;/span&gt; &lt;span class="st"&gt;"default"&lt;/span&gt;...
&lt;span class="kw"&gt;Machine&lt;/span&gt; &lt;span class="st"&gt;"default"&lt;/span&gt; was stopped.
$ &lt;span class="kw"&gt;vboxmanage&lt;/span&gt; modifyvm default --usb on
$ &lt;span class="co"&gt;#vboxmanage modifyvm default --usbxhci on&lt;/span&gt;
$ &lt;span class="kw"&gt;docker-machine&lt;/span&gt; start
&lt;span class="kw"&gt;Starting&lt;/span&gt; &lt;span class="st"&gt;"default"&lt;/span&gt;...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Check&lt;/span&gt; network to re-create if needed...
&lt;span class="kw"&gt;(default)&lt;/span&gt; &lt;span class="kw"&gt;Waiting&lt;/span&gt; for an IP...
&lt;span class="kw"&gt;Machine&lt;/span&gt; &lt;span class="st"&gt;"default"&lt;/span&gt; was started.
&lt;span class="kw"&gt;Waiting&lt;/span&gt; for SSH to be available...
&lt;span class="kw"&gt;Detecting&lt;/span&gt; the provisioner...
&lt;span class="kw"&gt;Started&lt;/span&gt; machines may have new IP addresses. You may need to re-run the &lt;span class="kw"&gt;`docker-machine&lt;/span&gt; env&lt;span class="kw"&gt;`&lt;/span&gt; command.&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;One way to attach USB devices to your VM &lt;code&gt;default&lt;/code&gt; is to add filter rules that attach matching devices automatically:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The first command lists the USB devices available on your host; the second adds a filter rule that automatically attaches USB devices with a matching vendor ID and product ID. Afterwards, unplug and replug those USB devices (or restart your VM) to trigger the attach filters.&lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;vboxmanage&lt;/span&gt; list usbhost
&lt;span class="kw"&gt;Host&lt;/span&gt; USB Devices:

&lt;span class="kw"&gt;UUID&lt;/span&gt;:               5b23cdc4-cfc7-49c4-81f6-844b1378cdf1
&lt;span class="kw"&gt;VendorId&lt;/span&gt;:           0x0403 (0403)
&lt;span class="kw"&gt;ProductId&lt;/span&gt;:          0x6001 (6001)
&lt;span class="kw"&gt;Revision&lt;/span&gt;:           6.0 (0600)
&lt;span class="kw"&gt;Port&lt;/span&gt;:               2
&lt;span class="kw"&gt;USB&lt;/span&gt; version/speed:  0/Full
&lt;span class="kw"&gt;Manufacturer&lt;/span&gt;:       FTDI
&lt;span class="kw"&gt;Product&lt;/span&gt;:            TTL232R-3V3
&lt;span class="kw"&gt;SerialNumber&lt;/span&gt;:       FTGI13S5
&lt;span class="kw"&gt;Address&lt;/span&gt;:            p=0x6001&lt;span class="kw"&gt;;&lt;/span&gt;&lt;span class="ot"&gt;v=&lt;/span&gt;0x0403;&lt;span class="ot"&gt;s=&lt;/span&gt;0x0000f62c7b7304f3;&lt;span class="ot"&gt;l=&lt;/span&gt;0x14212000
&lt;span class="kw"&gt;Current&lt;/span&gt; State:      Busy

$ &lt;span class="kw"&gt;vboxmanage&lt;/span&gt; usbfilter add 0 --target default --name &lt;span class="st"&gt;'FTDI TTL232R-3V3'&lt;/span&gt; --vendorid 0x0403 --productid 0x6001&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Alternatively, you can attach USB devices manually:
&lt;ul&gt;
&lt;li&gt;open the &lt;em&gt;VirtualBox&lt;/em&gt; application&lt;/li&gt;
&lt;li&gt;find your VM &lt;code&gt;default&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;select &lt;em&gt;Show&lt;/em&gt; or double-click on it&lt;/li&gt;
&lt;li&gt;wait for the VM window to show up&lt;/li&gt;
&lt;li&gt;select &lt;em&gt;Devices/USB&lt;/em&gt; from the menu and enable the devices you want to pass from the host to the guest (and consequently to &lt;em&gt;Docker&lt;/em&gt; containers)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;If you want to tweak something in your VM &lt;code&gt;default&lt;/code&gt; itself, you can SSH into it with:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker-machine&lt;/span&gt; ssh&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Activate the correct environment to use the &lt;em&gt;Docker Machine&lt;/em&gt; VM &lt;code&gt;default&lt;/code&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Warning&lt;/strong&gt;: Do not forget to run this &lt;strong&gt;each time you open a new console&lt;/strong&gt; in which you want to use &lt;em&gt;Docker&lt;/em&gt;!&lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;eval&lt;/span&gt; &lt;span class="st"&gt;"&lt;/span&gt;&lt;span class="ot"&gt;$(&lt;/span&gt;&lt;span class="kw"&gt;docker-machine&lt;/span&gt; env default&lt;span class="ot"&gt;)&lt;/span&gt;&lt;span class="st"&gt;"&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Now you are ready to use &lt;em&gt;Docker&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
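&lt;p&gt;As a first test you can pass the attached USB device through to a container with the &lt;code&gt;--device&lt;/code&gt; flag (a sketch only; the device path &lt;code&gt;/dev/ttyUSB0&lt;/code&gt; is an assumption that depends on your adapter and on the drivers in the boot2docker VM):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --device=/dev/ttyUSB0 debian:8 bash&lt;/code&gt;&lt;/pre&gt;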
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/machine/overview/"&gt;https://docs.docker.com/machine/overview/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/machine/get-started/"&gt;https://docs.docker.com/machine/get-started/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gist.github.com/stonehippo/e33750f185806924f1254349ea1a4e68"&gt;https://gist.github.com/stonehippo/e33750f185806924f1254349ea1a4e68&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="docker"></category><category term="windows"></category><category term="macos"></category><category term="install"></category></entry><entry><title>Docker dovecot-getmail</title><link href="http://gw.tnode.com/docker/dovecot-getmail/" rel="alternate"></link><updated>2016-12-16T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-12-01:docker/dovecot-getmail/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker 1.x logo" height="120" src="http://gw.tnode.com/docker/img/docker-1x-logo.png" width="354"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;docker-dovecot-getmail&lt;/em&gt;&lt;/strong&gt; is a &lt;a href="http://www.docker.com/"&gt;&lt;em&gt;Docker&lt;/em&gt;&lt;/a&gt; image based on &lt;em&gt;Debian 8&lt;/em&gt; that implements a private email gateway with &lt;a href="http://en.wikipedia.org/wiki/Dovecot_(software)"&gt;&lt;em&gt;dovecot&lt;/em&gt;&lt;/a&gt; and &lt;a href="http://en.wikipedia.org/wiki/Getmail"&gt;&lt;em&gt;getmail&lt;/em&gt;&lt;/a&gt;: it gathers email from multiple accounts onto a private IMAP server, while outgoing mail is sent through the public email infrastructure (SMTP).&lt;/p&gt;
&lt;p&gt;It is a &lt;em&gt;Docker&lt;/em&gt; container realizing an architecture similar to the one described at:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://joel.porquet.org/wiki/hacking/getmail_dovecot/"&gt;http://joel.porquet.org/wiki/hacking/getmail_dovecot/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;+-----------+              +-----------+               +--------------+
| ISP       |              | DOCKER    |               | LAPTOP       |
|           |              |           |           +--&amp;gt;|--------------|
| +-------+ | push/delete  | +-------+ | push/sync |   |  MAIL CLIENT +---+
| | IMAPS +-----------------&amp;gt;| IMAPS +&amp;lt;------------+   +--------------+   |
| +-------+ |              | +-------+ |           |   +--------------+   |
| +-------+ |              |           |           |   | ANDROID      |   |
| | SMTP  |&amp;lt;-------+       |           |           +--&amp;gt;|--------------|   |
| +-------+ |      |       |           |               |  MAIL CLIENT +---+
+-----------+      |       +-----------+               +--------------+   |
                   +------------------------------------------------------+&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Open source project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; home: &lt;a href="http://gw.tnode.com/docker/dovecot-getmail/"&gt;http://gw.tnode.com/docker/dovecot-getmail/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-github-square"&gt;&lt;/i&gt; github: &lt;a href="http://github.com/gw0/docker-dovecot-getmail/"&gt;http://github.com/gw0/docker-dovecot-getmail/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-laptop"&gt;&lt;/i&gt; technology: &lt;em&gt;debian&lt;/em&gt;, &lt;em&gt;dovecot&lt;/em&gt;, &lt;em&gt;getmail&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-database"&gt;&lt;/i&gt; docker hub: &lt;a href="https://hub.docker.com/r/gw000/dovecot-getmail/"&gt;https://hub.docker.com/r/gw000/dovecot-getmail/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;Requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;/home&lt;/code&gt;: mounted user home directories (&lt;code&gt;Maildir&lt;/code&gt; in fs layout, &lt;code&gt;sieve&lt;/code&gt;, &lt;code&gt;.getmail&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/etc/cron.d&lt;/code&gt;: mounted crontabs that periodically run getmail for all accounts&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/etc/ssl/private&lt;/code&gt;: mounted SSL/TLS certificates (&lt;code&gt;dovecot.crt&lt;/code&gt;, &lt;code&gt;dovecot.key&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Prepare your getmailrc account configurations per user (&lt;code&gt;/srv/mail/home/user/.getmail/getmailrc-user@email.invalid&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# ~/.getmail/getmailrc-*: getmailrc email configuration

[retriever]
type = SimpleIMAPSSLRetriever
server = imap.email.invalid
username = user@email.invalid
port = 993
password = password
mailboxes = ("INBOX", "Sent", "Spam")

[destination]
type = MDA_external
path = /usr/lib/dovecot/deliver
arguments = ("-e",)

[options]
read_all = false
delete_after = 30
delivered_to = false
received = true
verbose = 1&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you are using Sieve filters and want a &lt;code&gt;Refilter&lt;/code&gt; mailbox to trigger their refiltering, create a refilter configuration per user (&lt;code&gt;/srv/mail/home/user/.getmail/getmailrc-refilter&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# ~/.getmail/getmailrc-*: getmailrc refilter configuration

[retriever]
type = SimpleIMAPRetriever
server = localhost
port = 143
username = user
password = password
mailboxes = ("Refilter",)

[destination]
type = MDA_external
path = /usr/lib/dovecot/deliver
arguments = ("-e",)

[options]
read_all = false
delete = true
delivered_to = false
received = false
verbose = 1&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Prepare a crontab file (&lt;code&gt;/srv/mail/cron.d/getmail&lt;/code&gt;) that periodically checks for new mail for each user and account:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/cron.d/getmail: system-wide crontab for getmail
SHELL=/bin/sh

# m h dom mon dow user  command
*/20 *  *   *   * user  ACC="user-refilter" &amp;amp;&amp;amp; (date; flock -n ~/.getmail/lock-$ACC getmail --rcfile="getmailrc-$ACC" --idle Refilter) &amp;gt;&amp;gt;"/var/log/getmail/$ACC.log" 2&amp;gt;&amp;amp;1
*/20 *  *   *   * user  ACC="user@email.invalid" &amp;amp;&amp;amp; (date; flock -n ~/.getmail/lock-$ACC getmail --rcfile="getmailrc-$ACC" --idle INBOX) &amp;gt;&amp;gt;"/var/log/getmail/$ACC.log" 2&amp;gt;&amp;amp;1&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Do not forget to place your SSL certificates as &lt;code&gt;/srv/mail/ssl/dovecot.crt&lt;/code&gt; and &lt;code&gt;/srv/mail/ssl/dovecot.key&lt;/code&gt;. SSL is required!&lt;/p&gt;
&lt;p&gt;Finally, start it with &lt;em&gt;docker&lt;/em&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -d -v /srv/mail/home:/home -v /srv/mail/cron.d:/etc/cron.d -v /srv/mail/ssl:/etc/ssl/private:ro -p 143 -p 993 -p 4190 --name mail gw000/dovecot-getmail&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or use &lt;em&gt;docker-compose&lt;/em&gt; (check out &lt;code&gt;docker-compose.example.yml&lt;/code&gt;).&lt;/p&gt;
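&lt;p&gt;For reference, a minimal &lt;em&gt;docker-compose&lt;/em&gt; service mirroring the &lt;code&gt;docker run&lt;/code&gt; command above might look like this (a sketch only; the &lt;code&gt;docker-compose.example.yml&lt;/code&gt; shipped in the repository is authoritative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;mail:
  image: gw000/dovecot-getmail
  volumes:
    - /srv/mail/home:/home
    - /srv/mail/cron.d:/etc/cron.d
    - /srv/mail/ssl:/etc/ssl/private:ro
  ports:
    - "143"
    - "993"
    - "4190"&lt;/code&gt;&lt;/pre&gt;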
&lt;p&gt;Users are created automatically with a default password (&lt;code&gt;replaceMeNow&lt;/code&gt;) on first start. To reset a user's password (in a running container):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; exec -it mail passwd user&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="feedback"&gt;Feedback&lt;/h2&gt;
&lt;p&gt;If you encounter any bugs or have feature requests, please file them in the &lt;a href="http://github.com/gw0/docker-dovecot-getmail/issues/"&gt;issue tracker&lt;/a&gt; or even develop it yourself and submit a pull request over &lt;a href="http://github.com/gw0/docker-dovecot-getmail/"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="license"&gt;License&lt;/h2&gt;
&lt;p&gt;Copyright © 2016 &lt;em&gt;gw0&lt;/em&gt; [&lt;a href="http://gw.tnode.com/"&gt;http://gw.tnode.com/&lt;/a&gt;] &amp;lt;&lt;script type="text/javascript"&gt;
&lt;!--
h='&amp;#116;&amp;#110;&amp;#x6f;&amp;#100;&amp;#x65;&amp;#46;&amp;#x63;&amp;#x6f;&amp;#x6d;';a='&amp;#64;';n='&amp;#x67;&amp;#x77;&amp;#46;&amp;#50;&amp;#48;&amp;#x31;&amp;#54;';e=n+a+h;
document.write('&lt;a h'+'ref'+'="ma'+'ilto'+':'+e+'"&gt;'+e+'&lt;\/'+'a'+'&gt;');
// --&gt;
&lt;/script&gt;&lt;noscript&gt;gw.2016 at tnode dot com&lt;/noscript&gt;&amp;gt;&lt;/p&gt;
&lt;p&gt;This library is licensed under the &lt;a href="LICENSE_AGPL-3.0.txt"&gt;GNU Affero General Public License 3.0+&lt;/a&gt; (AGPL-3.0+). Note that it is mandatory to make all modifications and complete source code of this library publicly available to any user.&lt;/p&gt;
</summary><category term="docker"></category><category term="email"></category><category term="image"></category></entry><entry><title>Discourse Sense Classification from Scratch using Focused RNNs</title><link href="http://gw.tnode.com/deep-learning/conll2016-presentation/" rel="alternate"></link><updated>2016-08-12T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-08-12:deep-learning/conll2016-presentation/</id><summary type="html">&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
&lt;div class="orange" style="text-align:left;"&gt;
*CoNLL 2016, Berlin*
&lt;/div&gt;&lt;br /&gt;&lt;br /&gt;

&lt;div style="text-align:center;"&gt;
### Discourse Sense Classification
### from Scratch using Focused RNNs
&lt;/div&gt;&lt;br /&gt;&lt;br /&gt;

&lt;div style="text-align:right;"&gt;
&lt;span class="green"&gt;*&lt;a href="http://gw.tnode.com/" rel="author"&gt;gw0&lt;/a&gt;*&lt;/span&gt;
&lt;br /&gt;
[&lt;http://gw.tnode.com/&gt;]
&lt;br /&gt;
&amp;lt;&lt;gw.2016@tnode.com&gt;&amp;gt;
&lt;/div&gt;&lt;!-- --&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## CoNLL 2016 Shared Task

- multilingual **shallow discourse parsing**
  - PDTB for English
    - 32535 relations, 23 senses, 43918 different words
  - CDTB for Chinese
    - 10240 relations, 12 senses, 14785 different words
  - evaluation on never seen test and blind test datasets
- subtask of discourse relation **sense classification**
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Traditional NLP approach

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-traditional.png" width="100%" alt="Traditional NLP approach to discourse relation sense classification." /&gt;&lt;/figure&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### From scratch

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-scratch.png" width="100%" alt="Approach from scratch for discourse relation sense classification." /&gt;&lt;/figure&gt;
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Our system overview

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-v34-overview.png" width="70%" alt="General system overview with two discourse sense classifiers." /&gt;&lt;/figure&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Main components

- **two separate models** with different parameters, but same implementation
- **input** consists of tokenized text spans
- **word embeddings** are trained from scratch
- **focused RNNs** specialize multiple RNNs for different aspects of text spans
- **classification** with a feed-forward neural network
- **end-to-end training** for language-independence
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Individual model

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-v34-model.png" width="60%" alt="Our individual discourse sense classifier/model." /&gt;&lt;/figure&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Input

- only tokenized **text spans**
  - &lt;span class="blue"&gt;*arg1_ids*&lt;/span&gt;, &lt;span class="red"&gt;*arg2_ids*&lt;/span&gt;, &lt;span class="green"&gt;*conn_ids*&lt;/span&gt;, &lt;span&gt;*punc_ids*&lt;/span&gt;
- no preprocessing (no stemming; stopwords, invalid characters, and case are preserved; all tokens are considered)
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Word embeddings

- trainable **lookup table**
  - random uniform initialization
- no pre-trained embeddings (no word2vec)
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Focused RNNs

- proposed novel approach
- **specialize multiple RNNs** for different aspects of texts
- **focus RNN**
  - produces a sequence of focus weights that are used to scale the inputs of other RNNs
  - one focus dimension influences one RNN
  - forward GRU layers with 8-16 focus dimensions
- similar to multi-attention mechanisms, but without constructing a query vector to direct attention
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Focused RNNs architecture

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-v34-focused-rnns.png" width="70%" alt="Diagram of the focused RNN architecture." /&gt;&lt;/figure&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Classification

- **feed-forward neural network**
  - 1 hidden layer with SReLU activation
  - 1 output layer with softmax activation
- outputs relation sense or nonsense probabilities
  - ***relation_senses***
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### End-to-end training

- categorical cross-entropy loss
- backpropagation with Adam optimizer
- mini-batch updates
- **regularization**
  - dropout layers
  - negative random samples (nonsense)
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Official results on English

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-v34-en-overall.jpg" width="50%" alt="Overall F1-measures of discourse relation sense classification evaluated on different relation types on English datasets." /&gt;&lt;/figure&gt;

- $F_1$-measures on PDTB dataset
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Per-sense results on English

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-v34-en-all-per-sense.jpg" width="50%" alt="Per-sense F1-measures of discourse relation sense classification evaluated on all relations on English datasets." /&gt;&lt;/figure&gt;
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Official results on Chinese

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-v34-zh-overall.jpg" width="50%" alt="Overall F1-measures of discourse relation sense classification evaluated on different relation types on Chinese datasets." /&gt;&lt;/figure&gt;

- $F_1$-measures on CDTB dataset
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Per-sense results on Chinese

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-v34-zh-all-per-sense.jpg" width="50%" alt="Per-sense F1-measures of discourse relation sense classification evaluated on all relations on Chinese datasets." /&gt;&lt;/figure&gt;
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Work in progress

- detailed analysis
  - comparable performance with a single model*
  - impact of parameters
  - bidirectional RNNs
  - visualization of focus
- focused RNNs on other NLP tasks
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Implementation

- *Keras* (*Theano* or *Tensorflow*)

- *Docker* container to simplify experiments

  ```docker run -it gw000/keras-full ipython```

- source code released

  &amp;lt;&lt;http://github.com/gw0/conll16st-v34-focused-rnns&gt;&amp;gt;
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Questions?

&lt;figure&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/conll16st-v34-model.png" width="48%" alt="Our individual discourse sense classifier/model." /&gt;&lt;/figure&gt;

&lt;div style="text-align:right;"&gt;
&lt;span class="green"&gt;*&lt;a href="http://gw.tnode.com/" rel="author"&gt;gw0&lt;/a&gt;*&lt;/span&gt;
&lt;br /&gt;
[&lt;http://gw.tnode.com/&gt;]
&lt;br /&gt;
&amp;lt;&lt;gw.2016@tnode.com&gt;&amp;gt;
&lt;/div&gt;&lt;!-- --&gt;
&lt;/script&gt;&lt;/section&gt;
</summary><category term="deep learning"></category><category term="nlp"></category><category term="presentation"></category></entry><entry><title>Micro Drone 3.0 Camera API</title><link href="http://gw.tnode.com/drone/micro-drone-3-0-camera-api/" rel="alternate"></link><updated>2016-07-30T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-07-30:drone/micro-drone-3-0-camera-api/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Micro Drone 3.0 logo" height="60" src="http://gw.tnode.com/drone/img/md3.0-logo.jpg" width="450"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="http://microdrone.co.uk/"&gt;&lt;em&gt;Micro Drone 3.0&lt;/em&gt;&lt;/a&gt; is an affordable feature-packed mini quadcopter. It can be controlled either using the 2.4 GHz remote controller (receiver on main board) or an Android/iPhone app over WiFi (through the camera module). Although the project was crowdfunded on Indiegogo, the software is not open source and no API documentation is available. This is an attempt to reverse engineer the main parts of the Camera API, so anyone can experiment further.&lt;/p&gt;
&lt;figure class="text-center"&gt;
&lt;img alt="Micro Drone 3.0 photo" height="300" src="http://gw.tnode.com/drone/img/md3.0-photo.jpg" width="533"/&gt;
&lt;/figure&gt;
&lt;h3 id="general"&gt;General&lt;/h3&gt;
&lt;p&gt;The Micro Drone 3.0 Camera module takes around 20 seconds to boot when you attach it to the battery. It creates a WiFi access point (e.g. &lt;code&gt;MD3.0_ABCD&lt;/code&gt;) to which you need to connect your phone before starting the Micro Drone app.&lt;/p&gt;
&lt;figure class="text-center"&gt;
&lt;img alt="Micro Drone 3.0 Android app screenshot" height="300" src="http://gw.tnode.com/drone/img/md3.0-android-fly-screenshot.jpg" width="533"/&gt;
&lt;/figure&gt;
&lt;p&gt;The Camera module is on &lt;code&gt;192.168.1.1&lt;/code&gt; and provides the following services:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;67/udp&lt;/em&gt;: DHCP daemon (&lt;code&gt;udhcpd&lt;/code&gt; from &lt;em&gt;BusyBox v1.16.1&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;80/tcp&lt;/em&gt;: HTTP server and controller (&lt;code&gt;/bin/camera&lt;/code&gt; using &lt;em&gt;Boa/0.94.14rc21&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;2345/tcp&lt;/em&gt;: unknown (&lt;code&gt;/bin/camera&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;10000/udp&lt;/em&gt;: unknown (&lt;code&gt;/bin/camera&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;23/tcp&lt;/em&gt;: telnet daemon with root shell (&lt;code&gt;telnetd&lt;/code&gt;) (needs to be activated)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Android/iPhone app talks to the main daemon (&lt;code&gt;/bin/camera&lt;/code&gt;) exclusively over port 80/tcp for camera operations, video streaming, and serial remote control. On this one port the daemon exposes both an HTTP interface and a custom protocol for video streaming and serial remote-control commands.&lt;/p&gt;
&lt;h3 id="http-api"&gt;HTTP API&lt;/h3&gt;
&lt;p&gt;Over HTTP a &lt;em&gt;Reecam&lt;/em&gt; CGI interface is available (user &lt;code&gt;admin&lt;/code&gt; without any password), but it is easier to use the handy &lt;a href="http://github.com/larsks/mdcam"&gt;&lt;em&gt;mdcam&lt;/em&gt; tool&lt;/a&gt; to work with the camera.&lt;/p&gt;
&lt;p&gt;To download all photos and videos to your computer with the &lt;em&gt;mdcam&lt;/em&gt; tool:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;mdcam&lt;/span&gt; ls &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;cut&lt;/span&gt; -d&lt;span class="st"&gt;' '&lt;/span&gt; -f1 &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;xargs&lt;/span&gt; -n 1 mdcam download&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The full HTTP API is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;/get_status.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/get_params.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/get_properties.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/check_user.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/get_badauth.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/create_session.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/close_session.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/is_mjpeg_stream_exist.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/request_av.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/set_params.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/snapshot.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/videostream.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/ptz_control.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/get_log.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/wifi_scan.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/get_session_list.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/search_record.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/del_record.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/get_record.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/start_record.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/stop_record.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/format_sd.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/backup.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/restore.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/restore_factory.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/delete_factory.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/upgrade.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/write_comm.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/read_comm.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/test_wifi_connected.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/alarm_snapshots.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/search_snapshot.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/del_snapshot.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/get_snapshot.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/set_stream.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/get_cur_ir_adc_value.cgi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/wifi_ate_tx_start.cgi&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To get a live video stream, open the following network URL in &lt;em&gt;VLC media player&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://192.168.1.1/av.asf"&gt;http://192.168.1.1/av.asf&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="telnet-shell"&gt;Telnet shell&lt;/h3&gt;
&lt;p&gt;To activate the Telnet interface, enable it through the HTTP API by visiting:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://192.168.1.1/set_params.cgi?telnetd=1&amp;amp;save=1&amp;amp;reboot=1"&gt;http://192.168.1.1/set_params.cgi?telnetd=1&amp;amp;save=1&amp;amp;reboot=1&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After a reboot you can connect over Telnet to a root shell running &lt;em&gt;BusyBox v1.16.1&lt;/em&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;telnet&lt;/span&gt; 192.168.1.1
&lt;span class="kw"&gt;Trying&lt;/span&gt; 192.168.1.1...
&lt;span class="kw"&gt;Connected&lt;/span&gt; to 192.168.1.1.
&lt;span class="kw"&gt;Escape&lt;/span&gt; character is &lt;span class="st"&gt;'^]'&lt;/span&gt;.

&lt;span class="co"&gt;# uname -a&lt;/span&gt;
&lt;span class="kw"&gt;Linux&lt;/span&gt; (none) &lt;span class="kw"&gt;3.0.8&lt;/span&gt; &lt;span class="co"&gt;#1723 Wed Nov 25 16:49:38 CST 2015 armv5tejl GNU/Linux&lt;/span&gt;
&lt;span class="co"&gt;# cat /proc/mtd &lt;/span&gt;
&lt;span class="kw"&gt;dev&lt;/span&gt;:    size   erasesize  name
&lt;span class="kw"&gt;mtd0&lt;/span&gt;: 00100000 00010000 &lt;span class="st"&gt;"boot"&lt;/span&gt;
&lt;span class="kw"&gt;mtd1&lt;/span&gt;: 00500000 00010000 &lt;span class="st"&gt;"kernel"&lt;/span&gt;
&lt;span class="kw"&gt;mtd2&lt;/span&gt;: 00080000 00010000 &lt;span class="st"&gt;"user"&lt;/span&gt;
&lt;span class="kw"&gt;mtd3&lt;/span&gt;: 00180000 00010000 &lt;span class="st"&gt;"manufacturer"&lt;/span&gt;
&lt;span class="co"&gt;# df&lt;/span&gt;
&lt;span class="kw"&gt;Filesystem&lt;/span&gt;           1K-blocks      Used Available Use% Mounted on
&lt;span class="kw"&gt;tmpfs&lt;/span&gt;                    18056         4     18052   0% /dev
&lt;span class="kw"&gt;/dev/mtdblock2&lt;/span&gt;             512       252       260  49% /mnt/user
&lt;span class="kw"&gt;/dev/mtdblock3&lt;/span&gt;            1536       220      1316  14% /mnt/manufacturer
&lt;span class="kw"&gt;/dev/mmcblk0p1&lt;/span&gt;        15621208   1151032  14470176   7% /mnt/sd
&lt;span class="co"&gt;# mount&lt;/span&gt;
&lt;span class="kw"&gt;rootfs&lt;/span&gt; on / type rootfs (rw)
&lt;span class="kw"&gt;proc&lt;/span&gt; on /proc type proc (rw,relatime)
&lt;span class="kw"&gt;sysfs&lt;/span&gt; on /sys type sysfs (rw,relatime)
&lt;span class="kw"&gt;tmpfs&lt;/span&gt; on /dev type tmpfs (rw,relatime)
&lt;span class="kw"&gt;devpts&lt;/span&gt; on /dev/pts type devpts (rw,relatime,mode=600,ptmxmode=000)
&lt;span class="kw"&gt;/dev/mtdblock2&lt;/span&gt; on /mnt/user type jffs2 (rw,relatime)
&lt;span class="kw"&gt;/dev/mtdblock3&lt;/span&gt; on /mnt/manufacturer type jffs2 (rw,relatime)
&lt;span class="kw"&gt;/dev/mmcblk0p1&lt;/span&gt; on /mnt/sd type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Retrieving the &lt;code&gt;/bin/camera&lt;/code&gt; binary is a little tricky, because it deletes itself after starting. Nevertheless it remains loaded in memory, so you can find it under &lt;code&gt;/proc/*/exe&lt;/code&gt;. It seems the developers expected someone to examine this binary, because it contains the string &lt;code&gt;hello kitty and kgb/cia 2011 COPYRIGHT@REECAM 5460&lt;/code&gt;.&lt;/p&gt;
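&lt;p&gt;The same &lt;code&gt;/proc&lt;/code&gt; trick can be demonstrated on any Linux host: even after an executable deletes its own file, its image stays reachable through &lt;code&gt;/proc/&amp;lt;pid&amp;gt;/exe&lt;/code&gt;. A minimal sketch, using the current shell's own binary as a stand-in for &lt;code&gt;/bin/camera&lt;/code&gt;:&lt;/p&gt;

```shell
# Recover a running program's executable through /proc/<pid>/exe.
# $$ is the current shell's PID; on the camera you would use the
# PID of the camera daemon instead.
out=$(mktemp)
cp "/proc/$$/exe" "$out"
# The recovered copy is byte-identical to the running executable.
cmp -s "/proc/$$/exe" "$out" && echo "recovered $(wc -c < "$out") bytes"
rm -f "$out"
```

On the camera itself, the same copy can be written to the SD card mount for offline analysis.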
&lt;h3 id="serial-interface"&gt;Serial interface&lt;/h3&gt;
&lt;p&gt;The Android/iPhone app also sends remote control commands to the drone. This is done through a serial interface that seems to be exposed directly to the app by &lt;code&gt;libcamlib.so&lt;/code&gt; on the client side and &lt;code&gt;/bin/camera&lt;/code&gt; on the server side.&lt;/p&gt;
&lt;p&gt;Disassembling the Android APK reveals that &lt;code&gt;com.D_Lawliet.fly.FlyActivity&lt;/code&gt; contains the &lt;code&gt;targets = new byte[7]&lt;/code&gt; property that encodes the serial commands sent with each packet. The meaning of these bytes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;targets[0]&lt;/code&gt;: -6&lt;/li&gt;
&lt;li&gt;&lt;code&gt;targets[1]&lt;/code&gt;: throttle (0 is off)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;targets[2]&lt;/code&gt;: rudder (min 1?)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;targets[3]&lt;/code&gt;: elevation (1..127)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;targets[4]&lt;/code&gt;: aileron (1..127)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;targets[5]&lt;/code&gt;: 0 + settings (speed mode, inverted flying)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;targets[6]&lt;/code&gt;: checksum (&lt;code&gt;[1]&lt;/code&gt; xor &lt;code&gt;[2]&lt;/code&gt; xor &lt;code&gt;[3]&lt;/code&gt; xor &lt;code&gt;[4]&lt;/code&gt; xor &lt;code&gt;[5]&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
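&lt;p&gt;The checksum byte can be reproduced in a few lines of shell; the values below are bytes &lt;code&gt;[1]&lt;/code&gt;..&lt;code&gt;[5]&lt;/code&gt; of the throttle proof-of-concept packet that follows:&lt;/p&gt;

```shell
# XOR checksum for targets[6]: bytes [1]..[5] of the control packet
# (here 0x40 0x40 0x40 0x40 0x00, from the throttle PoC packet).
b1=0x40; b2=0x40; b3=0x40; b4=0x40; b5=0x00
checksum=$(( b1 ^ b2 ^ b3 ^ b4 ^ b5 ))
printf 'checksum: 0x%02x\n' "$checksum"
# → checksum: 0x00
```

The result, &lt;code&gt;0x00&lt;/code&gt;, matches the last byte of the packet below.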
&lt;p&gt;The target serial device for these commands is &lt;code&gt;/dev/ttyAMA1&lt;/code&gt;. A proof of concept that triggers the throttle for a few seconds:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;&lt;span class="co"&gt;# echo -e "\xfa\x40\x40\x40\x40\x00\x00" &amp;gt; /dev/ttyAMA1&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://microdrone.co.uk/"&gt;http://microdrone.co.uk/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://github.com/larsks/mdcam"&gt;http://github.com/larsks/mdcam&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://wiki.reecam.cn/CGI/Overview"&gt;http://wiki.reecam.cn/CGI/Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.utest.com/articles/iot-security-hacking-a-drone-camera-to-spread-malware-part-1"&gt;http://www.utest.com/articles/iot-security-hacking-a-drone-camera-to-spread-malware-part-1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://hackaday.io/project/12119-unlocking-the-power-of-the-micro-drone-30"&gt;http://hackaday.io/project/12119-unlocking-the-power-of-the-micro-drone-30&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="drone"></category><category term="hack"></category></entry><entry><title>Discourse Sense Classification from Scratch using Focused RNNs</title><link href="http://gw.tnode.com/deep-learning/conll2016-discourse-sense-classification-from-scratch-using-focused-rnns/" rel="alternate"></link><updated>2016-08-12T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-06-20:deep-learning/conll2016-discourse-sense-classification-from-scratch-using-focused-rnns/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="CoNLL 2016 logo" height="72" src="http://gw.tnode.com/deep-learning/img/conll2016-logo.jpg" width="450"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h2 id="conference-proceeding"&gt;Conference proceeding&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;G. Weiss and M. Bajec, “&lt;strong&gt;Discourse Sense Classification from Scratch using Focused RNNs&lt;/strong&gt;,” in Proceedings of the Twentieth Conference on Computational Natural Language Learning - Shared Task, 2016, pp. 50–54.&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; &lt;a href="http://www.conll.org/"&gt;conference&lt;/a&gt;, &lt;a href="http://aclweb.org/anthology/K/K16/"&gt;proceedings&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-book"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/deep-learning/f/conll2016weiss-paper.pdf"&gt;paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-picture-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/deep-learning/f/conll2016weiss-presentation.pdf"&gt;presentation&lt;/a&gt;, &lt;a href="http://gw.tnode.com/deep-learning/conll2016-presentation/"&gt;online&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-bookmark-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/deep-learning/f/conll2016weiss.bib"&gt;bibtex&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-github-square"&gt;&lt;/i&gt; &lt;a href="http://github.com/gw0/conll16st-v34-focused-rnns"&gt;code&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;This subtask of the &lt;em&gt;CoNLL 2016 Shared Task&lt;/em&gt; focuses on sense classification of multilingual shallow discourse relations. Existing systems rely heavily on external resources, hand-engineered features, patterns, and complex pipelines fine-tuned for the English language. In this paper we describe a different approach and system inspired by end-to-end training of deep neural networks. Its input consists only of sequences of tokens, which are processed by our novel focused RNNs layer, followed by a dense neural network for classification. The neural networks implicitly learn latent features useful for discourse relation sense classification, which makes the approach almost language-agnostic and independent of prior linguistic knowledge. In the closed-track sense classification our system achieved an overall F1-measure of &lt;em&gt;0.5246&lt;/em&gt; on the English blind dataset and set a new state of the art with an F1-measure of &lt;em&gt;0.7292&lt;/em&gt; on the Chinese blind dataset.&lt;/p&gt;
&lt;figure&gt;
&lt;img alt="Our CoNLL 2016 Shared Task individual discourse sense classifier/model." height="484" src="http://gw.tnode.com/deep-learning/img/conll16st-v34-model.png" width="600"/&gt;
&lt;/figure&gt;
</summary><category term="deep learning"></category><category term="nlp"></category><category term="conference"></category><category term="paper"></category></entry><entry><title>Docker debian-cuda</title><link href="http://gw.tnode.com/docker/debian-cuda/" rel="alternate"></link><updated>2016-12-21T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-06-16:docker/debian-cuda/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker 1.x logo" height="120" src="http://gw.tnode.com/docker/img/docker-1x-logo.png" width="354"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;docker-debian-cuda&lt;/em&gt;&lt;/strong&gt; is a minimal &lt;a href="http://www.docker.com/"&gt;&lt;em&gt;Docker&lt;/em&gt;&lt;/a&gt; image built from &lt;em&gt;Debian 9&lt;/em&gt; (amd64) with &lt;a href="http://developer.nvidia.com/cuda-toolkit"&gt;&lt;em&gt;CUDA Toolkit&lt;/em&gt;&lt;/a&gt; and &lt;em&gt;cuDNN&lt;/em&gt; using only Debian packages.&lt;/p&gt;
&lt;p&gt;Although the &lt;em&gt;nvidia-docker&lt;/em&gt; tool can run CUDA inside Docker images, it uses yet another wrapper command and is based on Ubuntu images. To make the whole process more transparent, we explicitly expose GPU devices and build from official Debian images. All installations are performed through the Debian package manager, not least because the official Nvidia CUDA Toolkit does not support Debian without hacks.&lt;/p&gt;
&lt;p&gt;Open source project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; home: &lt;a href="http://gw.tnode.com/docker/debian-cuda/"&gt;http://gw.tnode.com/docker/debian-cuda/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-github-square"&gt;&lt;/i&gt; github: &lt;a href="http://github.com/gw0/docker-debian-cuda/"&gt;http://github.com/gw0/docker-debian-cuda/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-laptop"&gt;&lt;/i&gt; technology: &lt;em&gt;debian&lt;/em&gt;, &lt;em&gt;cuda toolkit&lt;/em&gt;, &lt;em&gt;opencl&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-database"&gt;&lt;/i&gt; docker hub: &lt;a href="http://hub.docker.com/r/gw000/debian-cuda/"&gt;http://hub.docker.com/r/gw000/debian-cuda/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Available tags:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;8.0.44-2_5.1.5-1_375.20-4&lt;/code&gt;, &lt;code&gt;8.0_5.1&lt;/code&gt;, &lt;code&gt;latest&lt;/code&gt; [2016-12-21]: &lt;em&gt;CUDA Toolkit&lt;/em&gt; &lt;small&gt;(8.0.44-2)&lt;/small&gt; + &lt;em&gt;cuDNN&lt;/em&gt; &lt;small&gt;(5.1.5-1)&lt;/small&gt; + &lt;em&gt;CUDA library&lt;/em&gt; &lt;small&gt;(375.20-4)&lt;/small&gt; (&lt;a href="http://github.com/gw0/docker-debian-cuda/blob/master/Dockerfile"&gt;&lt;em&gt;Dockerfile&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;7.5.18-4_5.1.3_361.45.18-2&lt;/code&gt;, &lt;code&gt;7.5_5.1&lt;/code&gt; [2016-09-19]: &lt;em&gt;CUDA Toolkit&lt;/em&gt; &lt;small&gt;(7.5.18-4)&lt;/small&gt; + &lt;em&gt;cuDNN&lt;/em&gt; &lt;small&gt;(5.1.3)&lt;/small&gt; + &lt;em&gt;CUDA library&lt;/em&gt; &lt;small&gt;(361.45.18-2)&lt;/small&gt; (&lt;a href="http://github.com/gw0/docker-debian-cuda/blob/master/Dockerfile"&gt;&lt;em&gt;Dockerfile&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;7.5.18-2&lt;/code&gt; [2016-07-20]: &lt;em&gt;CUDA Toolkit&lt;/em&gt; &lt;small&gt;(7.5.18-2)&lt;/small&gt; + &lt;em&gt;cuDNN&lt;/em&gt; &lt;small&gt;(4.0.7)&lt;/small&gt; + &lt;em&gt;CUDA library&lt;/em&gt; &lt;small&gt;(352.79-8)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;Host system requirements (e.g. Debian 8 or 9):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CUDA-capable GPU&lt;/li&gt;
&lt;li&gt;&lt;em&gt;nvidia-kernel-dkms&lt;/em&gt; &lt;small&gt;(same as CUDA library)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;optionally &lt;em&gt;nvidia-cuda-mps&lt;/em&gt;, &lt;em&gt;nvidia-smi&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To utilize your GPUs this Docker image needs access to your &lt;code&gt;/dev/nvidia*&lt;/code&gt; devices, like:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm &lt;span class="ot"&gt;$(&lt;/span&gt;&lt;span class="kw"&gt;ls&lt;/span&gt; /dev/nvidia* &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;xargs&lt;/span&gt; -I&lt;span class="dt"&gt;{}&lt;/span&gt; echo &lt;span class="st"&gt;'--device={}'&lt;/span&gt;&lt;span class="ot"&gt;)&lt;/span&gt; gw000/debian-cuda&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="host-system"&gt;Host system&lt;/h2&gt;
&lt;p&gt;List of devices that should be present on the host system:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;ll&lt;/span&gt; /dev/nvidia*
&lt;span class="kw"&gt;crw-rw----&lt;/span&gt; 1 root video 250,   0 Jul 13 15:56 /dev/nvidia-uvm
&lt;span class="kw"&gt;crw-rw----&lt;/span&gt; 1 root video 250,   1 Jul 13 15:56 /dev/nvidia-uvm-tools
&lt;span class="kw"&gt;crw-rw----&lt;/span&gt; 1 root video 195,   0 Jul 13 15:56 /dev/nvidia0
&lt;span class="kw"&gt;crw-rw----&lt;/span&gt; 1 root video 195, 255 Jul 13 15:56 /dev/nvidiactl&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In case &lt;code&gt;/dev/nvidia0&lt;/code&gt; and &lt;code&gt;/dev/nvidiactl&lt;/code&gt; are not present, ensure the kernel module &lt;code&gt;nvidia&lt;/code&gt; is automatically loaded, properly configured and optimized, and there is a &lt;em&gt;udev&lt;/em&gt; rule to create the devices:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;echo&lt;/span&gt; &lt;span class="st"&gt;'nvidia'&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /etc/modules-load.d/nvidia.conf
$ &lt;span class="kw"&gt;cat&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /etc/udev/rules.d/70-nvidia.rules &amp;lt;&amp;lt; __EOF__
KERNEL=="nvidia", RUN+="/bin/ba&lt;span class="kw"&gt;sh&lt;/span&gt; -c &lt;span class="st"&gt;'/usr/bin/nvidia-smi -L &amp;amp;&amp;amp; /bin/chmod 0660 /dev/nvidia* &amp;amp;&amp;amp; /bin/chgrp video /dev/nvidia*'"&lt;/span&gt;
&lt;span class="st"&gt;__EOF__&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For &lt;em&gt;OpenCL&lt;/em&gt; support the devices &lt;code&gt;/dev/nvidia-uvm&lt;/code&gt; and &lt;code&gt;/dev/nvidia-uvm-tools&lt;/code&gt; are needed. Ensure the kernel module &lt;code&gt;nvidia-uvm&lt;/code&gt; is automatically loaded, and add a custom &lt;em&gt;udev&lt;/em&gt; rule to create the device:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;echo&lt;/span&gt; &lt;span class="st"&gt;'nvidia-uvm'&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /etc/modules-load.d/nvidia-uvm.conf
$ &lt;span class="kw"&gt;cat&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /etc/udev/rules.d/70-nvidia-uvm.rules &amp;lt;&amp;lt; __EOF__
KERNEL=="nvidia_uvm", RUN+="/bin/ba&lt;span class="kw"&gt;sh&lt;/span&gt; -c &lt;span class="st"&gt;'/usr/bin/nvidia-modprobe -c0 -u &amp;amp;&amp;amp; /bin/chmod 0660 /dev/nvidia-uvm* &amp;amp;&amp;amp; /bin/chgrp video /dev/nvidia-uvm*'"&lt;/span&gt;
&lt;span class="st"&gt;__EOF__&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To monitor real-time temperatures on your host system, use something like:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;watch&lt;/span&gt; -n 5 &lt;span class="st"&gt;'nvidia-smi; echo; sensors; for hdd in /dev/sd?; do echo -n "$hdd  "; smartctl -A $hdd | grep Temperature_Celsius; done'&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If your Nvidia kernel driver and CUDA library versions differ, an error appears in the kernel messages (&lt;code&gt;dmesg&lt;/code&gt;) or when using &lt;code&gt;nvidia-smi&lt;/code&gt; inside the container. Possible solutions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;upgrade your Nvidia kernel driver on the host directly from &lt;em&gt;Debian 9&lt;/em&gt; packages: &lt;a href="https://packages.debian.org/stretch/amd64/nvidia-kernel-dkms"&gt;nvidia-kernel-dkms&lt;/a&gt;, &lt;a href="https://packages.debian.org/stretch/amd64/nvidia-alternative"&gt;nvidia-alternative&lt;/a&gt;, &lt;a href="https://packages.debian.org/stretch/amd64/libnvidia-ml1"&gt;libnvidia-ml1&lt;/a&gt;, &lt;a href="https://packages.debian.org/stretch/amd64/nvidia-smi"&gt;nvidia-smi&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;upgrade your Nvidia kernel driver on the host by compiling it yourself&lt;/li&gt;
&lt;li&gt;inject the correct version of the CUDA library into the container (if it is installed on the host) with:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm &lt;span class="ot"&gt;$(&lt;/span&gt;&lt;span class="kw"&gt;ls&lt;/span&gt; /dev/nvidia* &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;xargs&lt;/span&gt; -I&lt;span class="dt"&gt;{}&lt;/span&gt; echo &lt;span class="st"&gt;'--device={}'&lt;/span&gt;&lt;span class="ot"&gt;)&lt;/span&gt; &lt;span class="ot"&gt;$(&lt;/span&gt;&lt;span class="kw"&gt;ls&lt;/span&gt; /usr/lib/x86_64-linux-gnu/libcuda.* &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;xargs&lt;/span&gt; -I&lt;span class="dt"&gt;{}&lt;/span&gt; echo &lt;span class="st"&gt;'-v {}:{}:ro'&lt;/span&gt;&lt;span class="ot"&gt;)&lt;/span&gt; gw000/debian-cuda&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="feedback"&gt;Feedback&lt;/h2&gt;
&lt;p&gt;If you encounter any bugs or have feature requests, please file them in the &lt;a href="http://github.com/gw0/docker-debian-cuda/issues/"&gt;issue tracker&lt;/a&gt;, or implement the change yourself and submit a pull request on &lt;a href="http://github.com/gw0/docker-debian-cuda/"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="license"&gt;License&lt;/h2&gt;
&lt;p&gt;Copyright © 2016 &lt;em&gt;gw0&lt;/em&gt; [&lt;a href="http://gw.tnode.com/"&gt;http://gw.tnode.com/&lt;/a&gt;] &amp;lt;&lt;script type="text/javascript"&gt;
&lt;!--
h='&amp;#116;&amp;#110;&amp;#x6f;&amp;#100;&amp;#x65;&amp;#46;&amp;#x63;&amp;#x6f;&amp;#x6d;';a='&amp;#64;';n='&amp;#x67;&amp;#x77;&amp;#46;&amp;#50;&amp;#48;&amp;#x31;&amp;#54;';e=n+a+h;
document.write('&lt;a h'+'ref'+'="ma'+'ilto'+':'+e+'"&gt;'+e+'&lt;\/'+'a'+'&gt;');
// --&gt;
&lt;/script&gt;&lt;noscript&gt;gw.2016 at tnode dot com&lt;/noscript&gt;&amp;gt;&lt;/p&gt;
&lt;p&gt;This library is licensed under the &lt;a href="LICENSE_AGPL-3.0.txt"&gt;GNU Affero General Public License 3.0+&lt;/a&gt; (AGPL-3.0+). Note that the license requires making all modifications and the complete source code of this library publicly available to any user.&lt;/p&gt;
</summary><category term="docker"></category><category term="debian"></category><category term="image"></category></entry><entry><title>Docker keras-full</title><link href="http://gw.tnode.com/docker/keras-full/" rel="alternate"></link><updated>2017-01-19T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-06-16:docker/keras-full/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker 1.x logo" height="120" src="http://gw.tnode.com/docker/img/docker-1x-logo.png" width="354"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;docker-keras-full&lt;/em&gt;&lt;/strong&gt; is a &lt;a href="http://www.docker.com/"&gt;&lt;em&gt;Docker&lt;/em&gt;&lt;/a&gt; image built from &lt;em&gt;Debian 9&lt;/em&gt; (amd64) with a full reproducible deep learning research environment based on &lt;a href="http://keras.io/"&gt;&lt;em&gt;Keras&lt;/em&gt;&lt;/a&gt; and &lt;a href="http://jupyter.org/"&gt;&lt;em&gt;Jupyter&lt;/em&gt;&lt;/a&gt;. It supports CPU and GPU processing with &lt;a href="http://deeplearning.net/software/theano/"&gt;&lt;em&gt;Theano&lt;/em&gt;&lt;/a&gt; and &lt;a href="http://www.tensorflow.org/"&gt;&lt;em&gt;TensorFlow&lt;/em&gt;&lt;/a&gt; backends. It features &lt;em&gt;Jupyter Notebook&lt;/em&gt; with &lt;em&gt;Python 2 and 3&lt;/em&gt; support and uses only Debian and Python packages (no manual installations).&lt;/p&gt;
&lt;p&gt;Open source project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; home: &lt;a href="http://gw.tnode.com/docker/keras-full/"&gt;http://gw.tnode.com/docker/keras-full/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-github-square"&gt;&lt;/i&gt; github: &lt;a href="http://github.com/gw0/docker-keras-full/"&gt;http://github.com/gw0/docker-keras-full/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-laptop"&gt;&lt;/i&gt; technology: &lt;em&gt;debian&lt;/em&gt;, &lt;em&gt;keras&lt;/em&gt;, &lt;em&gt;theano&lt;/em&gt;, &lt;em&gt;tensorflow&lt;/em&gt;, &lt;em&gt;openblas&lt;/em&gt;, &lt;em&gt;cuda toolkit&lt;/em&gt;, &lt;em&gt;python&lt;/em&gt;, &lt;em&gt;numpy&lt;/em&gt;, &lt;em&gt;h5py&lt;/em&gt;, &lt;em&gt;jupyter&lt;/em&gt;, &lt;em&gt;matplotlib&lt;/em&gt;, &lt;em&gt;pillow&lt;/em&gt;, &lt;em&gt;pandas&lt;/em&gt;, &lt;em&gt;scikit-learn&lt;/em&gt;, &lt;em&gt;statsmodels&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-database"&gt;&lt;/i&gt; docker hub: &lt;a href="http://hub.docker.com/r/gw000/keras-full/"&gt;http://hub.docker.com/r/gw000/keras-full/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Available tags:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;1.2.0&lt;/code&gt;, &lt;code&gt;latest&lt;/code&gt; [2016-12-21]: &lt;em&gt;Python 2.7/3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.2.0)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.12.0)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.1.0&lt;/code&gt; [2016-09-20]: &lt;em&gt;Python 2.7/3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.1.0)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.10.0)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.8&lt;/code&gt; [2016-08-28]: &lt;em&gt;Python 2.7/3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.8)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.9.0)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.6&lt;/code&gt; [2016-07-20]: &lt;em&gt;Python 2.7/3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.6)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.9.0)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.4&lt;/code&gt; [2016-06-16]: &lt;em&gt;Python 2.7/3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.4)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.8.0)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;Quick experiment from console with IPython 2.7 or 3.5:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm gw000/keras-full ipython2
$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm gw000/keras-full ipython3&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To start the Jupyter IPython web interface on &lt;code&gt;http://&amp;lt;ip&amp;gt;:8888/&lt;/code&gt; (password: &lt;code&gt;keras&lt;/code&gt;) with notebooks stored in &lt;code&gt;/srv/notebooks&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -d -p=6006:6006 -p=8888:8888 -v=/srv/notebooks:/srv gw000/keras-full&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To utilize your GPUs this Docker image needs access to your &lt;code&gt;/dev/nvidia*&lt;/code&gt; devices (see &lt;a href="http://gw.tnode.com/docker/debian-cuda/"&gt;docker-debian-cuda&lt;/a&gt;), like:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -d &lt;span class="ot"&gt;$(&lt;/span&gt;&lt;span class="kw"&gt;ls&lt;/span&gt; /dev/nvidia* &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;xargs&lt;/span&gt; -I&lt;span class="dt"&gt;{}&lt;/span&gt; echo &lt;span class="st"&gt;'--device={}'&lt;/span&gt;&lt;span class="ot"&gt;)&lt;/span&gt; -p=6006:6006 -p=8888:8888 -v=/srv/notebooks:/srv gw000/keras-full&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To change the default password, prepare &lt;a href="https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#preparing-a-hashed-password"&gt;a new hashed password&lt;/a&gt; and pass it as an environment variable:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -d -p=6006:6006 -p=8888:8888 -e PASSWD=&lt;span class="st"&gt;"sha1:..."&lt;/span&gt; -v=/srv/notebooks:/srv gw000/keras-full&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="feedback"&gt;Feedback&lt;/h2&gt;
&lt;p&gt;If you encounter any bugs or have feature requests, please file them in the &lt;a href="http://github.com/gw0/docker-keras-full/issues/"&gt;issue tracker&lt;/a&gt;, or implement the change yourself and submit a pull request on &lt;a href="http://github.com/gw0/docker-keras-full/"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="license"&gt;License&lt;/h2&gt;
&lt;p&gt;Copyright © 2016 &lt;em&gt;gw0&lt;/em&gt; [&lt;a href="http://gw.tnode.com/"&gt;http://gw.tnode.com/&lt;/a&gt;] &amp;lt;&lt;script type="text/javascript"&gt;
&lt;!--
h='&amp;#116;&amp;#110;&amp;#x6f;&amp;#100;&amp;#x65;&amp;#46;&amp;#x63;&amp;#x6f;&amp;#x6d;';a='&amp;#64;';n='&amp;#x67;&amp;#x77;&amp;#46;&amp;#50;&amp;#48;&amp;#x31;&amp;#54;';e=n+a+h;
document.write('&lt;a h'+'ref'+'="ma'+'ilto'+':'+e+'"&gt;'+e+'&lt;\/'+'a'+'&gt;');
// --&gt;
&lt;/script&gt;&lt;noscript&gt;gw.2016 at tnode dot com&lt;/noscript&gt;&amp;gt;&lt;/p&gt;
&lt;p&gt;This library is licensed under the &lt;a href="LICENSE_AGPL-3.0.txt"&gt;GNU Affero General Public License 3.0+&lt;/a&gt; (AGPL-3.0+). Note that the license requires making all modifications and the complete source code of this library publicly available to any user.&lt;/p&gt;
</summary><category term="docker"></category><category term="deep learning"></category><category term="image"></category></entry><entry><title>Docker keras</title><link href="http://gw.tnode.com/docker/keras/" rel="alternate"></link><updated>2017-01-26T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-04-12:docker/keras/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker 1.x logo" height="120" src="http://gw.tnode.com/docker/img/docker-1x-logo.png" width="354"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;docker-keras&lt;/em&gt;&lt;/strong&gt; is a minimal &lt;a href="http://www.docker.com/"&gt;&lt;em&gt;Docker&lt;/em&gt;&lt;/a&gt; image built from &lt;em&gt;Debian 9&lt;/em&gt; (amd64) for reproducible deep learning based on &lt;a href="http://keras.io/"&gt;&lt;em&gt;Keras&lt;/em&gt;&lt;/a&gt;. It features minimal images for &lt;em&gt;Python 2 or 3&lt;/em&gt;, &lt;a href="http://www.tensorflow.org/"&gt;&lt;em&gt;TensorFlow&lt;/em&gt;&lt;/a&gt; or &lt;a href="http://deeplearning.net/software/theano/"&gt;&lt;em&gt;Theano&lt;/em&gt;&lt;/a&gt; backends, processing on CPU or GPU, and uses only Debian and Python packages (no manual installations).&lt;/p&gt;
&lt;p&gt;Open source project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; home: &lt;a href="http://gw.tnode.com/docker/keras/"&gt;http://gw.tnode.com/docker/keras/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-github-square"&gt;&lt;/i&gt; github: &lt;a href="http://github.com/gw0/docker-keras/"&gt;http://github.com/gw0/docker-keras/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-laptop"&gt;&lt;/i&gt; technology: &lt;em&gt;debian&lt;/em&gt;, &lt;em&gt;keras&lt;/em&gt;, &lt;em&gt;tensorflow&lt;/em&gt;, &lt;em&gt;theano&lt;/em&gt;, &lt;em&gt;openblas&lt;/em&gt;, &lt;em&gt;cuda toolkit&lt;/em&gt;, &lt;em&gt;python&lt;/em&gt;, &lt;em&gt;numpy&lt;/em&gt;, &lt;em&gt;h5py&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-database"&gt;&lt;/i&gt; docker hub: &lt;a href="https://hub.docker.com/r/gw000/keras/"&gt;https://hub.docker.com/r/gw000/keras/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Available tags:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;1.2.1-py2&lt;/code&gt;, &lt;code&gt;1.2.1-cpu&lt;/code&gt;, &lt;code&gt;1.2.1&lt;/code&gt;, &lt;code&gt;latest&lt;/code&gt; points to &lt;code&gt;1.2.1-py2-tf-cpu&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.1-py3&lt;/code&gt; points to &lt;code&gt;1.2.1-py3-tf-cpu&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.1-gpu&lt;/code&gt; points to &lt;code&gt;1.2.1-py2-tf-gpu&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.1-py2-tf-cpu&lt;/code&gt;/&lt;code&gt;1.2.1-py2-tf-gpu&lt;/code&gt; [2017-01-20]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.2.1)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.12.1)&lt;/small&gt; on CPU/GPU (&lt;a href="http://github.com/gw0/docker-keras/blob/master/Dockerfile.py2-tf-cpu"&gt;&lt;em&gt;Dockerfile.py2-tf-cpu&lt;/em&gt;&lt;/a&gt;/&lt;a href="http://github.com/gw0/docker-keras/blob/master/Dockerfile.py2-tf-gpu"&gt;&lt;em&gt;.py2-tf-gpu&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.1-py2-th-cpu&lt;/code&gt;/&lt;code&gt;1.2.1-py2-th-gpu&lt;/code&gt; [2017-01-20]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.2.1)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU (&lt;a href="http://github.com/gw0/docker-keras/blob/master/Dockerfile.py2-th-cpu"&gt;&lt;em&gt;Dockerfile.py2-th-cpu&lt;/em&gt;&lt;/a&gt;/&lt;a href="http://github.com/gw0/docker-keras/blob/master/Dockerfile.py2-th-gpu"&gt;&lt;em&gt;.py2-th-gpu&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.1-py3-tf-cpu&lt;/code&gt;/&lt;code&gt;1.2.1-py3-tf-gpu&lt;/code&gt; [2017-01-20]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.2.1)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.12.1)&lt;/small&gt; on CPU/GPU (&lt;a href="http://github.com/gw0/docker-keras/blob/master/Dockerfile.py3-tf-cpu"&gt;&lt;em&gt;Dockerfile.py3-tf-cpu&lt;/em&gt;&lt;/a&gt;/&lt;a href="http://github.com/gw0/docker-keras/blob/master/Dockerfile.py3-tf-gpu"&gt;&lt;em&gt;.py3-tf-gpu&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.1-py3-th-cpu&lt;/code&gt;/&lt;code&gt;1.2.1-py3-th-gpu&lt;/code&gt; [2017-01-20]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.2.1)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU (&lt;a href="http://github.com/gw0/docker-keras/blob/master/Dockerfile.py3-th-cpu"&gt;&lt;em&gt;Dockerfile.py3-th-cpu&lt;/em&gt;&lt;/a&gt;/&lt;a href="http://github.com/gw0/docker-keras/blob/master/Dockerfile.py3-th-gpu"&gt;&lt;em&gt;.py3-th-gpu&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.0-py2-tf-cpu&lt;/code&gt;/&lt;code&gt;1.2.0-py2-tf-gpu&lt;/code&gt; [2016-12-20]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.2.0)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.12.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.0-py2-th-cpu&lt;/code&gt;/&lt;code&gt;1.2.0-py2-th-gpu&lt;/code&gt; [2016-12-20]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.2.0)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.0-py3-tf-cpu&lt;/code&gt;/&lt;code&gt;1.2.0-py3-tf-gpu&lt;/code&gt; [2016-12-20]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.2.0)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.12.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.2.0-py3-th-cpu&lt;/code&gt;/&lt;code&gt;1.2.0-py3-th-gpu&lt;/code&gt; [2016-12-20]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.2.0)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.1.1-py2-tf-cpu&lt;/code&gt;/&lt;code&gt;1.1.1-py2-tf-gpu&lt;/code&gt; [2016-10-31]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.1.1)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.10.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.1.1-py2-th-cpu&lt;/code&gt;/&lt;code&gt;1.1.1-py2-th-gpu&lt;/code&gt; [2016-10-31]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.1.1)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.1.1-py3-tf-cpu&lt;/code&gt;/&lt;code&gt;1.1.1-py3-tf-gpu&lt;/code&gt; [2016-10-31]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.1.1)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.10.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.1.1-py3-th-cpu&lt;/code&gt;/&lt;code&gt;1.1.1-py3-th-gpu&lt;/code&gt; [2016-10-31]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.1.1)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.1.0-py2-tf-cpu&lt;/code&gt;/&lt;code&gt;1.1.0-py2-tf-gpu&lt;/code&gt; [2016-09-20]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.1.0)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.10.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.1.0-py2-th-cpu&lt;/code&gt;/&lt;code&gt;1.1.0-py2-th-gpu&lt;/code&gt; [2016-09-20]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.1.0)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.1.0-py3-tf-cpu&lt;/code&gt;/&lt;code&gt;1.1.0-py3-tf-gpu&lt;/code&gt; [2016-09-20]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.1.0)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.10.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.1.0-py3-th-cpu&lt;/code&gt;/&lt;code&gt;1.1.0-py3-th-gpu&lt;/code&gt; [2016-09-20]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.1.0)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.8-py2-tf-cpu&lt;/code&gt;/&lt;code&gt;1.0.8-py2-tf-gpu&lt;/code&gt; [2016-08-28]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.8)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.9.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.8-py2-th-cpu&lt;/code&gt;/&lt;code&gt;1.0.8-py2-th-gpu&lt;/code&gt; [2016-08-28]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.8)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.8-py3-tf-cpu&lt;/code&gt;/&lt;code&gt;1.0.8-py3-tf-gpu&lt;/code&gt; [2016-08-28]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.8)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.9.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.8-py3-th-cpu&lt;/code&gt;/&lt;code&gt;1.0.8-py3-th-gpu&lt;/code&gt; [2016-08-28]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.8)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.6-py2-tf-cpu&lt;/code&gt;/&lt;code&gt;1.0.6-py2-tf-gpu&lt;/code&gt; [2016-07-20]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.6)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.9.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.6-py2-th-cpu&lt;/code&gt;/&lt;code&gt;1.0.6-py2-th-gpu&lt;/code&gt; [2016-07-20]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.6)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.6-py3-tf-cpu&lt;/code&gt;/&lt;code&gt;1.0.6-py3-tf-gpu&lt;/code&gt; [2016-07-20]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.6)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.9.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.6-py3-th-cpu&lt;/code&gt;/&lt;code&gt;1.0.6-py3-th-gpu&lt;/code&gt; [2016-07-20]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.6)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.4-py2-tf-cpu&lt;/code&gt;/&lt;code&gt;1.0.4-py2-tf-gpu&lt;/code&gt; [2016-06-16]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.4)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.8.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.4-py2-th-cpu&lt;/code&gt;/&lt;code&gt;1.0.4-py2-th-gpu&lt;/code&gt; [2016-06-16]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.4)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.4-py3-tf-cpu&lt;/code&gt;/&lt;code&gt;1.0.4-py3-tf-gpu&lt;/code&gt; [2016-06-16]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.4)&lt;/small&gt; + &lt;em&gt;TensorFlow&lt;/em&gt; &lt;small&gt;(0.8.0)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.4-py3-th-cpu&lt;/code&gt;/&lt;code&gt;1.0.4-py3-th-gpu&lt;/code&gt; [2016-06-16]: &lt;em&gt;Python 3.5&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.4)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.2)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1.0.1-py2-th-cpu&lt;/code&gt;/&lt;code&gt;1.0.1-py2-th-gpu&lt;/code&gt; [2016-04-16]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(1.0.1)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.1)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;li&gt;&lt;code&gt;0.3.3-py2-th-cpu&lt;/code&gt;/&lt;code&gt;0.3.3-py2-th-gpu&lt;/code&gt; [2016-03-31]: &lt;em&gt;Python 2.7&lt;/em&gt; + &lt;em&gt;Keras&lt;/em&gt; &lt;small&gt;(0.3.3)&lt;/small&gt; + &lt;em&gt;Theano&lt;/em&gt; &lt;small&gt;(0.8.1)&lt;/small&gt; on CPU/GPU&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;A quick experiment with the latest Keras (TensorFlow backend on CPU) and your Python 2 code in &lt;code&gt;/srv/ai&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm -v /srv/ai:/srv/ai gw000/keras /srv/ai/run.py&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or using TensorFlow backend on GPUs (see &lt;a href="http://gw.tnode.com/docker/debian-cuda/"&gt;docker-debian-cuda&lt;/a&gt;) in Python 2:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm &lt;span class="ot"&gt;$(&lt;/span&gt;&lt;span class="kw"&gt;ls&lt;/span&gt; /dev/nvidia* &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;xargs&lt;/span&gt; -I&lt;span class="dt"&gt;{}&lt;/span&gt; echo &lt;span class="st"&gt;'--device={}'&lt;/span&gt;&lt;span class="ot"&gt;)&lt;/span&gt; -v /srv/ai:/srv/ai gw000/keras:1.2.0-py2-tf-gpu /srv/ai/run.py&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or using Theano backend on GPUs (see &lt;a href="http://gw.tnode.com/docker/debian-cuda/"&gt;docker-debian-cuda&lt;/a&gt;) in Python 3:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm &lt;span class="ot"&gt;$(&lt;/span&gt;&lt;span class="kw"&gt;ls&lt;/span&gt; /dev/nvidia* &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;xargs&lt;/span&gt; -I&lt;span class="dt"&gt;{}&lt;/span&gt; echo &lt;span class="st"&gt;'--device={}'&lt;/span&gt;&lt;span class="ot"&gt;)&lt;/span&gt; -v /srv/ai:/srv/ai gw000/keras:1.2.0-py3-th-gpu /srv/ai/run.py&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In practice you should extend this image with your own &lt;code&gt;Dockerfile&lt;/code&gt; that installs all your application dependencies (using &lt;code&gt;apt-get&lt;/code&gt; or &lt;code&gt;pip&lt;/code&gt;). For example, if you need Matplotlib, PIL/Pillow, Pandas, scikit-learn, and statsmodels:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;FROM gw000/keras:1.2.0-py2-th-cpu

# install dependencies from debian packages
RUN apt-get update -qq \
 &amp;amp;&amp;amp; apt-get install --no-install-recommends -y \
    python-matplotlib \
    python-pillow

# install dependencies from python packages
RUN pip --no-cache-dir install \
    pandas \
    scikit-learn \
    statsmodels

# install your app
ADD ai/ /srv/ai/
RUN chmod +x /srv/ai/run.py

CMD ["/srv/ai/run.py"]&lt;/code&gt;&lt;/pre&gt;
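&lt;p&gt;A sketch of the build-and-run cycle for such an extended image. The staged &lt;code&gt;run.py&lt;/code&gt; and the tag &lt;code&gt;my-keras-app&lt;/code&gt; are placeholders; the &lt;code&gt;docker&lt;/code&gt; commands are shown as comments since they need a running Docker daemon:&lt;/p&gt;

```shell
# Stage a minimal build context matching the Dockerfile shown above
# (placeholder app; a real ai/run.py would contain your Keras code).
mkdir -p ai
printf '#!/usr/bin/env python\nprint("hello")\n' > ai/run.py
printf '%s\n' \
  'FROM gw000/keras:1.2.0-py2-th-cpu' \
  'ADD ai/ /srv/ai/' \
  'RUN chmod +x /srv/ai/run.py' \
  'CMD ["/srv/ai/run.py"]' > Dockerfile
# docker build -t my-keras-app .
# docker run -it --rm my-keras-app
```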
&lt;p&gt;If you are looking for a full deep learning research environment based on &lt;em&gt;Keras&lt;/em&gt; and &lt;em&gt;Jupyter&lt;/em&gt;, check out &lt;a href="http://gw.tnode.com/docker/keras-full/"&gt;docker-keras-full&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="feedback"&gt;Feedback&lt;/h2&gt;
&lt;p&gt;If you encounter any bugs or have feature requests, please file them in the &lt;a href="http://github.com/gw0/docker-keras/issues/"&gt;issue tracker&lt;/a&gt; or develop the feature yourself and submit a pull request on &lt;a href="http://github.com/gw0/docker-keras/"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="license"&gt;License&lt;/h2&gt;
&lt;p&gt;Copyright © 2016-2017 &lt;em&gt;gw0&lt;/em&gt; [&lt;a href="http://gw.tnode.com/"&gt;http://gw.tnode.com/&lt;/a&gt;] &amp;lt;gw.2017 at ena dot one&amp;gt;&lt;/p&gt;
&lt;p&gt;This library is licensed under the &lt;a href="LICENSE_AGPL-3.0.txt"&gt;GNU Affero General Public License 3.0+&lt;/a&gt; (AGPL-3.0+). Note that the license requires making all modifications and the complete source code of this library publicly available to any user.&lt;/p&gt;
</summary><category term="docker"></category><category term="deep learning"></category><category term="image"></category></entry><entry><title>Docker periodic-sync</title><link href="http://gw.tnode.com/docker/periodic-rsync/" rel="alternate"></link><updated>2016-12-02T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-03-15:docker/periodic-rsync/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker 1.x logo" height="120" src="http://gw.tnode.com/docker/img/docker-1x-logo.png" width="354"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;docker-periodic-rsync&lt;/em&gt;&lt;/strong&gt; is a &lt;a href="http://www.docker.com/"&gt;&lt;em&gt;Docker&lt;/em&gt;&lt;/a&gt; image based on &lt;em&gt;Debian 8&lt;/em&gt; with &lt;em&gt;cron&lt;/em&gt;, &lt;em&gt;ssh&lt;/em&gt; and &lt;a href="http://en.wikipedia.org/wiki/Rsync"&gt;&lt;em&gt;rsync&lt;/em&gt;&lt;/a&gt; for periodic or one-time remote &lt;em&gt;rsync&lt;/em&gt; copy jobs.&lt;/p&gt;
&lt;p&gt;Open source project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; home: &lt;a href="http://gw.tnode.com/docker/periodic-rsync/"&gt;http://gw.tnode.com/docker/periodic-rsync/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-github-square"&gt;&lt;/i&gt; github: &lt;a href="http://github.com/gw0/docker-periodic-rsync/"&gt;http://github.com/gw0/docker-periodic-rsync/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-laptop"&gt;&lt;/i&gt; technology: &lt;em&gt;debian&lt;/em&gt;, &lt;em&gt;cron&lt;/em&gt;, &lt;em&gt;ssh&lt;/em&gt;, &lt;em&gt;rsync&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-database"&gt;&lt;/i&gt; docker hub: &lt;a href="https://hub.docker.com/r/gw000/periodic-rsync/"&gt;https://hub.docker.com/r/gw000/periodic-rsync/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;Requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;setup passwordless SSH login on remote machines (&lt;a href="http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/"&gt;setup&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/root/.ssh&lt;/code&gt;: mount your passwordless SSH public and private keys (&lt;code&gt;id_rsa&lt;/code&gt;/&lt;code&gt;id_rsa.pub&lt;/code&gt;, chown to user &lt;code&gt;root&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/data&lt;/code&gt;: mount preferred target directory&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/etc/crontab&lt;/code&gt;: mount your crontab file (for periodic usage)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For one-time usage (a command must be specified):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm -v /srv/backup/.ssh:/root/.ssh -v /srv/backup/data:/data gw000/periodic-rsync rsync -zave ssh user@server.remote:dir/ /data&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For periodic usage, first prepare a crontab file such as &lt;code&gt;/srv/backup/cron.d/backup&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/cron.d/backup: system-wide crontab
SHELL=/bin/sh

# m h dom mon dow user  command
*/5 *   *   *   * root  rsync -zave ssh user@server.remote:dir/ /data&lt;/code&gt;&lt;/pre&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -d -v /srv/backup/.ssh:/root/.ssh -v /srv/backup/cron.d:/etc/cron.d -v /srv/backup/data:/data --name backup gw000/periodic-rsync&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="feedback"&gt;Feedback&lt;/h2&gt;
&lt;p&gt;If you encounter any bugs or have feature requests, please file them in the &lt;a href="http://github.com/gw0/docker-periodic-rsync/issues/"&gt;issue tracker&lt;/a&gt; or develop the feature yourself and submit a pull request on &lt;a href="http://github.com/gw0/docker-periodic-rsync/"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="license"&gt;License&lt;/h2&gt;
&lt;p&gt;Copyright © 2016 &lt;em&gt;gw0&lt;/em&gt; [&lt;a href="http://gw.tnode.com/"&gt;http://gw.tnode.com/&lt;/a&gt;] &amp;lt;gw.2016 at tnode dot com&amp;gt;&lt;/p&gt;
&lt;p&gt;This library is licensed under the &lt;a href="LICENSE_AGPL-3.0.txt"&gt;GNU Affero General Public License 3.0+&lt;/a&gt; (AGPL-3.0+). Note that the license requires making all modifications and the complete source code of this library publicly available to any user.&lt;/p&gt;
</summary><category term="docker"></category><category term="backup"></category><category term="image"></category></entry><entry><title>Docker Compose organized naming convention</title><link href="http://gw.tnode.com/docker/docker-compose-organized/" rel="alternate"></link><updated>2016-03-10T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-03-07:docker/docker-compose-organized/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker Compose 1.6 logo" height="120" src="http://gw.tnode.com/docker/img/docker-compose-1x-logo.png" width="122"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h2 id="install-docker-compose"&gt;Install Docker Compose&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Docker Compose 1.6.2&lt;/em&gt; does not have a &lt;em&gt;Debian&lt;/em&gt; package, so instead of installing its &lt;em&gt;Python&lt;/em&gt; dependencies on the host, it can be run on-demand inside a &lt;em&gt;Docker&lt;/em&gt; container with a helper script.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; -O /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/1.6.2/run.sh
$ &lt;span class="kw"&gt;chmod&lt;/span&gt; +x /usr/local/bin/docker-compose&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/compose/overview/"&gt;https://docs.docker.com/compose/overview/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/compose/releases"&gt;https://github.com/docker/compose/releases&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="organized-naming-convention"&gt;Organized naming convention&lt;/h2&gt;
&lt;p&gt;Instead of having &lt;em&gt;Docker&lt;/em&gt; containers inside &lt;code&gt;/var/lib/docker&lt;/code&gt;, deployment repositories and data volumes all over the place, the following naming convention has proven useful:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/srv/docker/   -- Docker containers (from /var/lib/docker/)
/srv/repos/    -- per-project deployment repositories
/srv/repos/project1/docker-compose.yaml
/srv/repos/project1/docker-db/Dockerfile
/srv/repos/project1/docker-web/Dockerfile
/srv/storage/  -- per-container data volumes mounted on host
/srv/storage/project1/db
/srv/storage/project1/web&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Initializing such a setup only requires moving some files, creating a symbolic link, and creating two empty directories:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;systemctl&lt;/span&gt; stop docker
$ &lt;span class="kw"&gt;mv&lt;/span&gt; /var/lib/docker/ /srv/docker/ &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;ln&lt;/span&gt; -s /srv/docker/ /var/lib/docker
$ &lt;span class="kw"&gt;mkdir&lt;/span&gt; /srv/repos/ /srv/storage/
$ &lt;span class="kw"&gt;systemctl&lt;/span&gt; start docker&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Afterwards, bringing up your infrastructure is as simple as:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cd&lt;/span&gt; /srv/repos/project1
$ &lt;span class="kw"&gt;docker-compose&lt;/span&gt; up -d
$ &lt;span class="kw"&gt;docker-compose&lt;/span&gt; logs&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="project-development-lifecycle"&gt;Project development lifecycle&lt;/h2&gt;
&lt;p&gt;First prepare an empty &lt;em&gt;Git&lt;/em&gt; repository on your &lt;em&gt;Docker&lt;/em&gt; server:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="ot"&gt;NAME=&lt;/span&gt;project1
$ &lt;span class="kw"&gt;mkdir&lt;/span&gt; /srv/repos/&lt;span class="ot"&gt;$NAME&lt;/span&gt; &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;cd&lt;/span&gt; /srv/repos/&lt;span class="ot"&gt;$NAME&lt;/span&gt;
$ &lt;span class="kw"&gt;git&lt;/span&gt; init
$ &lt;span class="kw"&gt;git&lt;/span&gt; config receive.denycurrentbranch false
$ &lt;span class="kw"&gt;echo&lt;/span&gt; -e &lt;span class="st"&gt;'#!/bin/sh\ngit --work-tree=.. checkout -f'&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; ./.git/hooks/post-receive &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;chmod&lt;/span&gt; +x ./.git/hooks/post-receive&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Setup your project repository and code for deployment with &lt;em&gt;Docker Compose&lt;/em&gt; on your workstation:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="ot"&gt;NAME=&lt;/span&gt;project1 &lt;span class="ot"&gt;SERVER=&lt;/span&gt;foo
$ &lt;span class="kw"&gt;mkdir&lt;/span&gt; ./&lt;span class="ot"&gt;$NAME&lt;/span&gt; &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;cd&lt;/span&gt; ./&lt;span class="ot"&gt;$NAME&lt;/span&gt;
$ &lt;span class="kw"&gt;git&lt;/span&gt; init
$ &lt;span class="kw"&gt;git&lt;/span&gt; remote add &lt;span class="ot"&gt;$SERVER&lt;/span&gt; ssh://&lt;span class="ot"&gt;$SERVER&lt;/span&gt;/srv/repos/own.tnode.com
$ &lt;span class="kw"&gt;git&lt;/span&gt; add *
$ &lt;span class="kw"&gt;git&lt;/span&gt; commit -m &lt;span class="st"&gt;'Initial commit for deploying.'&lt;/span&gt;
$ &lt;span class="kw"&gt;git&lt;/span&gt; push --set-upstream &lt;span class="ot"&gt;$SERVER&lt;/span&gt; master&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Afterwards, bring up your infrastructure as described above:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cd&lt;/span&gt; /srv/repos/project1
$ &lt;span class="kw"&gt;docker-compose&lt;/span&gt; up -d
$ &lt;span class="kw"&gt;docker-compose&lt;/span&gt; logs&lt;/code&gt;&lt;/pre&gt;
</summary><category term="docker"></category><category term="debian"></category><category term="install"></category><category term="usage"></category></entry><entry><title>Docker installation on Debian 8</title><link href="http://gw.tnode.com/docker/docker-installation-on-debian-8/" rel="alternate"></link><updated>2016-03-10T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-03-07:docker/docker-installation-on-debian-8/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker 1.x logo" height="120" src="http://gw.tnode.com/docker/img/docker-1x-logo.png" width="354"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h2 id="install-docker-engine"&gt;Install Docker Engine&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Docker Engine 1.10.2&lt;/em&gt; requires a 64-bit OS with at least kernel version 3.10. The official &lt;em&gt;Debian repositories&lt;/em&gt; are a few Docker versions behind, so it is best to install it directly from Docker’s APT repository.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;apt-get&lt;/span&gt; install linux-image-4.3.0-0.bpo.1-amd64
$ &lt;span class="kw"&gt;vi&lt;/span&gt; /etc/default/grub
&lt;span class="ot"&gt;GRUB_CMDLINE_LINUX=&lt;/span&gt;&lt;span class="st"&gt;"cgroup_enable=memory swapaccount=1"&lt;/span&gt;
$ &lt;span class="kw"&gt;update-grub&lt;/span&gt;
$ &lt;span class="kw"&gt;reboot&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;apt-key&lt;/span&gt; adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ &lt;span class="kw"&gt;echo&lt;/span&gt; &lt;span class="st"&gt;'deb http://apt.dockerproject.org/repo debian-jessie main'&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list.d/docker.list
$ &lt;span class="kw"&gt;apt-get&lt;/span&gt; update
$ &lt;span class="kw"&gt;apt-get&lt;/span&gt; install docker-engine&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="journald-logging-driver"&gt;Journald logging driver&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Debian 8&lt;/em&gt; switched to &lt;em&gt;systemd&lt;/em&gt; as its default system and service manager, so it would make sense to also store log messages from containers in &lt;em&gt;systemd journal&lt;/em&gt; (&lt;code&gt;journald&lt;/code&gt; logging driver). Also note that &lt;em&gt;Docker&lt;/em&gt; by default logs messages to a JSON file (&lt;code&gt;json-file&lt;/code&gt; logging driver) that can get corrupted in some situations.&lt;/p&gt;
&lt;p&gt;The logging driver can be configured by passing the &lt;code&gt;--log-driver=journald&lt;/code&gt; option to the &lt;em&gt;Docker&lt;/em&gt; daemon. Afterwards, log messages can be retrieved with:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;journalctl&lt;/span&gt; CONTAINER_NAME=web&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="overlay-storage-driver"&gt;Overlay storage driver&lt;/h3&gt;
&lt;p&gt;The default &lt;code&gt;aufs&lt;/code&gt; storage driver is production-ready and stable, but it is not efficient under high write activity and is not included in the mainline &lt;em&gt;Linux&lt;/em&gt; kernel. Recently the &lt;code&gt;overlay&lt;/code&gt; storage driver has been gaining popularity; it is available in mainline kernels since 3.18 (&lt;code&gt;apt-get install linux-image-4.3.0-0.bpo.1-amd64&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;The storage driver can be enforced by passing the &lt;code&gt;--storage-driver=overlay&lt;/code&gt; option to the &lt;em&gt;Docker&lt;/em&gt; daemon.&lt;/p&gt;
&lt;h3 id="configure-systemd-unit"&gt;Configure systemd unit&lt;/h3&gt;
&lt;p&gt;The &lt;em&gt;Docker&lt;/em&gt; daemon needs to be configured to start with the above options as a &lt;em&gt;systemd&lt;/em&gt; service:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;mkdir&lt;/span&gt; /etc/systemd/system/docker.service.d/
$ &lt;span class="kw"&gt;cat&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /etc/systemd/system/docker.service.d/10-execstart.conf &amp;lt;&amp;lt; __EOF__
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H &lt;span class="kw"&gt;fd&lt;/span&gt;:// --log-driver=journald --storage-driver=overlay --icc=false --iptables=true
__EOF__
$ systemctl daemon-reload &amp;amp;&amp;amp; systemctl restart docker&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check if &lt;em&gt;Docker&lt;/em&gt; daemon is set up correctly:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; info
&lt;span class="kw"&gt;Containers&lt;/span&gt;: 0
 &lt;span class="kw"&gt;Running&lt;/span&gt;: 0
 &lt;span class="kw"&gt;Paused&lt;/span&gt;: 0
 &lt;span class="kw"&gt;Stopped&lt;/span&gt;: 0
&lt;span class="kw"&gt;Images&lt;/span&gt;: 0
&lt;span class="kw"&gt;Server&lt;/span&gt; Version: 1.10.2
&lt;span class="kw"&gt;Storage&lt;/span&gt; Driver: overlay
 &lt;span class="kw"&gt;Backing&lt;/span&gt; Filesystem: extfs
&lt;span class="kw"&gt;Execution&lt;/span&gt; Driver: native-0.2
&lt;span class="kw"&gt;Logging&lt;/span&gt; Driver: journald
&lt;span class="kw"&gt;Plugins&lt;/span&gt;: 
 &lt;span class="kw"&gt;Volume&lt;/span&gt;: local
 &lt;span class="kw"&gt;Network&lt;/span&gt;: host bridge null
&lt;span class="kw"&gt;Kernel&lt;/span&gt; Version: 4.3.0-0.bpo.1-amd64
&lt;span class="kw"&gt;Operating&lt;/span&gt; System: Debian GNU/Linux 8 (jessie)
&lt;span class="kw"&gt;OSType&lt;/span&gt;: linux
&lt;span class="kw"&gt;Architecture&lt;/span&gt;: x86_64
&lt;span class="kw"&gt;CPUs&lt;/span&gt;: 8
&lt;span class="kw"&gt;Total&lt;/span&gt; Memory: 7.697 GiB
&lt;span class="kw"&gt;Name&lt;/span&gt;: m1
&lt;span class="kw"&gt;ID&lt;/span&gt;: AAAA:BBBB:AAAA:BBBB:AAAA:BBBB:AAAA:BBBB:AAAA:BBBB:AAAA:BBBB
$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm debian /bin/echo &lt;span class="st"&gt;"WOOHOOO"&lt;/span&gt;
&lt;span class="kw"&gt;Unable&lt;/span&gt; to find image &lt;span class="st"&gt;'debian:latest'&lt;/span&gt; locally
&lt;span class="kw"&gt;latest&lt;/span&gt;: Pulling from library/debian
&lt;span class="kw"&gt;fdd5d7827f33&lt;/span&gt;: Pull complete 
&lt;span class="kw"&gt;a3ed95caeb02&lt;/span&gt;: Pull complete 
&lt;span class="kw"&gt;Digest&lt;/span&gt;: sha256:e7d38b3517548a1c71e41bffe9c8ae6d6d29546ce46bf62159837aad072c90aa
&lt;span class="kw"&gt;Status&lt;/span&gt;: Downloaded newer image for debian:latest
&lt;span class="kw"&gt;WOOHOOO&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="continue"&gt;Continue&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://gw.tnode.com/docker/docker-compose-organized/"&gt;Docker Compose organized naming convention&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://gw.tnode.com/docker/weave-network-driver-on-debian-8/"&gt;Weave network driver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://gw.tnode.com/docker/multiple-docker-hosts-with-socat-tunnels/"&gt;Multiple Docker hosts with socat tunnels&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/quickstart/"&gt;https://docs.docker.com/engine/quickstart/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/installation/linux/debian/"&gt;https://docs.docker.com/engine/installation/linux/debian/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/security/security/"&gt;https://docs.docker.com/engine/security/security/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/admin/logging/journald/"&gt;https://docs.docker.com/engine/admin/logging/journald/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/userguide/storagedriver/aufs-driver/"&gt;https://docs.docker.com/engine/userguide/storagedriver/aufs-driver/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/admin/systemd/"&gt;https://docs.docker.com/engine/admin/systemd/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="docker"></category><category term="debian"></category><category term="install"></category></entry><entry><title>Multiple Docker hosts with socat tunnels</title><link href="http://gw.tnode.com/docker/multiple-docker-hosts-with-socat-tunnels/" rel="alternate"></link><updated>2016-03-10T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-03-07:docker/multiple-docker-hosts-with-socat-tunnels/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker 1.x logo" height="120" src="http://gw.tnode.com/docker/img/docker-1x-logo.png" width="354"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;Multiple &lt;em&gt;Docker&lt;/em&gt; hosts can be managed remotely and on demand with &lt;code&gt;socat&lt;/code&gt; tunnels. There is no need to deploy &lt;em&gt;Docker Swarm&lt;/em&gt;, reconfigure the &lt;em&gt;Docker&lt;/em&gt; daemon, or expose its port through a proxy.&lt;/p&gt;
&lt;h2 id="socat-tunnels-to-docker-hosts"&gt;Socat tunnels to Docker hosts&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Docker&lt;/em&gt; hosts can be administered either individually or through &lt;em&gt;Docker Swarm&lt;/em&gt;. Because the &lt;em&gt;Docker&lt;/em&gt; daemon does not listen on a network interface by default, a workaround is needed to connect to it remotely.&lt;/p&gt;
&lt;p&gt;On all &lt;em&gt;Docker&lt;/em&gt; hosts install the &lt;code&gt;socat&lt;/code&gt; utility and set up password-less authentication over SSH. On your workstation also install the &lt;code&gt;socat&lt;/code&gt; utility and the &lt;code&gt;docker&lt;/code&gt; command. When needed, set up the tunnels with a command like:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;socat&lt;/span&gt; TCP-LISTEN:2350,bind=127.0.0.1,reuseaddr,fork,range=127.0.0.0/8 EXEC:&lt;span class="st"&gt;"ssh root@1.2.3.50 socat STDIO UNIX-CONNECT\:/run/docker.sock"&lt;/span&gt;
$ &lt;span class="kw"&gt;for&lt;/span&gt; &lt;span class="kw"&gt;d&lt;/span&gt; in 50 51 52&lt;span class="kw"&gt;;&lt;/span&gt; &lt;span class="kw"&gt;do&lt;/span&gt; &lt;span class="kw"&gt;(socat&lt;/span&gt; TCP-LISTEN:23&lt;span class="ot"&gt;$d&lt;/span&gt;,bind=127.0.0.1,reuseaddr,fork,range=127.0.0.0/8 EXEC:&lt;span class="st"&gt;"ssh root@1.2.3.&lt;/span&gt;&lt;span class="ot"&gt;$d&lt;/span&gt;&lt;span class="st"&gt; socat STDIO UNIX-CONNECT\:/run/docker.sock"&lt;/span&gt; &lt;span class="kw"&gt;&amp;amp;)&lt;/span&gt;; &lt;span class="kw"&gt;done&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Afterwards you can control your &lt;em&gt;Docker&lt;/em&gt; hosts from the workstation simply by adding something like &lt;code&gt;-H 127.0.0.1:2350&lt;/code&gt; to each &lt;code&gt;docker&lt;/code&gt; command:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;iptables&lt;/span&gt; -A INPUT -i lo -p tcp -j ACCEPT
$ &lt;span class="kw"&gt;docker&lt;/span&gt; -H 127.0.0.1:2352 ps -a&lt;/code&gt;&lt;/pre&gt;
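&lt;p&gt;The per-host tunnel command is easy to generate programmatically. A minimal &lt;em&gt;Python&lt;/em&gt; sketch of the loop above (the helper name &lt;code&gt;socat_tunnel_cmd&lt;/code&gt; and the &lt;code&gt;1.2.3.x&lt;/code&gt; addressing scheme are just illustrations, not part of any tool):&lt;/p&gt;

```python
# Hypothetical helper that renders the socat tunnel command for one host:
# local port 23xx forwards to the Docker socket on host 1.2.3.xx over SSH.
def socat_tunnel_cmd(host_suffix, base="1.2.3", port_prefix="23"):
    port = port_prefix + str(host_suffix)
    return (
        "socat TCP-LISTEN:" + port
        + ",bind=127.0.0.1,reuseaddr,fork,range=127.0.0.0/8"
        + ' EXEC:"ssh root@' + base + "." + str(host_suffix)
        + ' socat STDIO UNIX-CONNECT\\:/run/docker.sock"'
    )

# One tunnel per Docker host, as in the for-loop above:
for d in (50, 51, 52):
    print(socat_tunnel_cmd(d))
```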
</summary><category term="docker"></category><category term="usage"></category></entry><entry><title>Weave network driver on Debian 8</title><link href="http://gw.tnode.com/docker/weave-network-driver-on-debian-8/" rel="alternate"></link><updated>2016-07-12T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2016-03-07:docker/weave-network-driver-on-debian-8/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Weaveworks company logo" height="71" src="http://gw.tnode.com/docker/img/weaveworks-logo.png" width="400"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h2 id="weave-network-driver"&gt;Weave network driver&lt;/h2&gt;
&lt;p&gt;Built-in network drivers are great for local communication inside a single host or for exposing ports. Unfortunately, the built-in multi-host networking support (the &lt;code&gt;overlay&lt;/code&gt; network driver) is based on VXLAN, which is not encrypted and needs full connectivity between hosts. If you do not have a secure, trusted LAN between all hosts, the &lt;code&gt;weave&lt;/code&gt; network driver provides a flexible and secure alternative. Like the &lt;em&gt;Docker&lt;/em&gt; daemon, &lt;em&gt;Weave&lt;/em&gt; also provides an embedded DNS server for automatic service discovery of containers.&lt;/p&gt;
&lt;p&gt;A &lt;em&gt;Weave 1.4.5&lt;/em&gt; network consists of several &lt;em&gt;Docker&lt;/em&gt; containers that are managed through a helper script and can be assigned to containers on demand. To build a &lt;em&gt;Weave&lt;/em&gt; network, we supply the addresses of the other hosts, and the network automatically (re)connects to peers as they become available.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; -O /usr/local/bin/weave https://git.io/weave
$ &lt;span class="kw"&gt;chmod&lt;/span&gt; +x /usr/local/bin/weave&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you have a strict firewall DROP policy, you must permit loopback traffic from the &lt;code&gt;weave&lt;/code&gt; script (TCP 6784, UDP 53), as well as inter-peer traffic to the &lt;em&gt;Weave&lt;/em&gt; control port (TCP 6783) and to the sleeve and fastdp data ports (UDP 6783 and 6784):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;iptables&lt;/span&gt; -A INPUT -i lo -p tcp --dport=6784 -j ACCEPT
$ &lt;span class="kw"&gt;iptables&lt;/span&gt; -A INPUT -i lo -p udp --dport=53 -j ACCEPT
$ &lt;span class="kw"&gt;iptables&lt;/span&gt; -A INPUT -p tcp --dport=6783 -j ACCEPT
$ &lt;span class="kw"&gt;iptables&lt;/span&gt; -A INPUT -p udp --dport=6783 -j ACCEPT
$ &lt;span class="kw"&gt;iptables&lt;/span&gt; -A INPUT -p udp --dport=6784 -j ACCEPT
$ &lt;span class="kw"&gt;iptables-save&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /etc/iptables/rules.v4&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check whether manually starting the &lt;em&gt;Weave&lt;/em&gt; network and adding its configuration parameters to the &lt;code&gt;docker&lt;/code&gt; command works:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;weave&lt;/span&gt; launch
$ &lt;span class="kw"&gt;weave&lt;/span&gt; status
$ &lt;span class="kw"&gt;docker&lt;/span&gt; &lt;span class="ot"&gt;$(&lt;/span&gt;&lt;span class="kw"&gt;weave&lt;/span&gt; config&lt;span class="ot"&gt;)&lt;/span&gt; run -it --rm debian /bin/ip addr show ethwe
&lt;span class="kw"&gt;38&lt;/span&gt;: ethwe@if39: &lt;span class="kw"&gt;&amp;lt;&lt;/span&gt;BROADCAST,MULTICAST,UP,LOWER_UP&lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    &lt;span class="kw"&gt;link/ether&lt;/span&gt; ea:c7:b6:f3:75:da brd ff:ff:ff:ff:ff:ff
    &lt;span class="kw"&gt;inet&lt;/span&gt; 10.32.0.1/12 scope global ethwe
       &lt;span class="kw"&gt;valid_lft&lt;/span&gt; forever preferred_lft forever
    &lt;span class="kw"&gt;inet6&lt;/span&gt; fe80::e8c7:b6ff:fef3:75da/64 scope link tentative 
       &lt;span class="kw"&gt;valid_lft&lt;/span&gt; forever preferred_lft forever
$ &lt;span class="kw"&gt;weave&lt;/span&gt; reset&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Otherwise you may want to create &lt;em&gt;Weave&lt;/em&gt; networks on demand (with &lt;code&gt;docker network create --driver=weave mynet&lt;/code&gt;) and join containers to them as usual (&lt;code&gt;--net=mynet&lt;/code&gt;).&lt;/p&gt;
&lt;h3 id="configure-systemd-service"&gt;Configure systemd service&lt;/h3&gt;
&lt;p&gt;The &lt;em&gt;Weave&lt;/em&gt; network also needs to be configured as a &lt;em&gt;systemd&lt;/em&gt; service unit, so that it starts on boot and optionally exposes the host IP:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cat&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /etc/default/weave &amp;lt;&amp;lt; __EOF__
CHECKPOINT_DISABLE=true
CONNLIMIT=100
WEAVE_NO_FASTDP=true
WEAVE_PASSWORD="wfvAwt7sj"
PEERS="1.2.3.4"
__EOF__
$ &lt;span class="kw"&gt;chmod&lt;/span&gt; 600 /etc/default/weave
$ &lt;span class="kw"&gt;cat&lt;/span&gt; &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /etc/systemd/system/weave.service &amp;lt;&amp;lt; __EOF__
[Unit]
Description=Weave Network
Documentation=http://docs.weave.works/weave/latest_release/
Requires=docker.service
After=docker.service

[Service]
EnvironmentFile=-/etc/default/weave
ExecStartPre=/usr/local/bin/weave launch --no-restart --connlimit &lt;span class="dt"&gt;\$&lt;/span&gt;CONNLIMIT &lt;span class="dt"&gt;\$&lt;/span&gt;PEERS
ExecStart=/usr/bin/docker attach weave
ExecStartPost=/bin/bash -c '/usr/local/bin/weave expose -h &lt;span class="dt"&gt;\$&lt;/span&gt;(hostname -s).weave.local'
ExecStop=/usr/local/bin/weave stop

[Install]
WantedBy=multi-user.target
__EOF__
$ systemctl daemon-reload &amp;amp;&amp;amp; systemctl enable weave &amp;amp;&amp;amp; systemctl restart weave&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check whether the &lt;em&gt;Weave&lt;/em&gt; network is set up correctly:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;weave&lt;/span&gt; status

        &lt;span class="kw"&gt;Version&lt;/span&gt;: 1.4.5

        &lt;span class="kw"&gt;Service&lt;/span&gt;: router
       &lt;span class="kw"&gt;Protocol&lt;/span&gt;: weave 1..2
           &lt;span class="kw"&gt;Name&lt;/span&gt;: aa:bb:f1:e5:98:a2(foo)
     &lt;span class="kw"&gt;Encryption&lt;/span&gt;: enabled
  &lt;span class="kw"&gt;PeerDiscovery&lt;/span&gt;: enabled
        &lt;span class="kw"&gt;Targets&lt;/span&gt;: 2
    &lt;span class="kw"&gt;Connections&lt;/span&gt;: 3 (2 established, 1 retrying)
          &lt;span class="kw"&gt;Peers&lt;/span&gt;: 3 (with 6 established connections)
 &lt;span class="kw"&gt;TrustedSubnets&lt;/span&gt;: none

        &lt;span class="kw"&gt;Service&lt;/span&gt;: ipam
         &lt;span class="kw"&gt;Status&lt;/span&gt;: ready
          &lt;span class="kw"&gt;Range&lt;/span&gt;: 10.32.0.0-10.47.255.255
  &lt;span class="kw"&gt;DefaultSubnet&lt;/span&gt;: 10.32.0.0/12

        &lt;span class="kw"&gt;Service&lt;/span&gt;: dns
         &lt;span class="kw"&gt;Domain&lt;/span&gt;: weave.local.
       &lt;span class="kw"&gt;Upstream&lt;/span&gt;: 1.2.3.4
            &lt;span class="kw"&gt;TTL&lt;/span&gt;: 1
        &lt;span class="kw"&gt;Entries&lt;/span&gt;: 0

        &lt;span class="kw"&gt;Service&lt;/span&gt;: proxy
        &lt;span class="kw"&gt;Address&lt;/span&gt;: unix:///var/run/weave/weave.sock

        &lt;span class="kw"&gt;Service&lt;/span&gt;: plugin
     &lt;span class="kw"&gt;DriverName&lt;/span&gt;: weave
$ &lt;span class="kw"&gt;weave&lt;/span&gt; status dns
&lt;span class="kw"&gt;docker-vm&lt;/span&gt;    10.45.0.0       weave:expose 82:a1:66:22:11:00&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://docs.weave.works/weave/latest_release/"&gt;http://docs.weave.works/weave/latest_release/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.weave.works/guides/networking-docker-containers-with-weave-on-ubuntu/"&gt;https://www.weave.works/guides/networking-docker-containers-with-weave-on-ubuntu/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/weaveworks/weave/blob/master/site/systemd.md"&gt;https://github.com/weaveworks/weave/blob/master/site/systemd.md&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.weave.works/documentation/net-latest-installing-weave/net-latest-systemd/"&gt;https://www.weave.works/documentation/net-latest-installing-weave/net-latest-systemd/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="docker"></category><category term="debian"></category><category term="install"></category></entry><entry><title>Simulation rs-skip-gram-in-myhdl</title><link href="http://gw.tnode.com/student/rs-skip-gram-in-myhdl/" rel="alternate"></link><updated>2015-10-08T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2015-08-18:student/rs-skip-gram-in-myhdl/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Word embeddings in NLP" height="120" src="http://gw.tnode.com/student/rs-skip-gram-in-myhdl/img/word-embeddings.png" width="636"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;Simulation &lt;a href="http://gw.tnode.com/student/rs-skip-gram-in-myhdl/"&gt;&lt;strong&gt;&lt;em&gt;rs-skip-gram-in-myhdl&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; implements the &lt;strong&gt;skip-gram model with negative sampling (SGNS)&lt;/strong&gt; in &lt;a href="http://www.myhdl.org/"&gt;&lt;em&gt;MyHDL&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Computing continuous distributed vector representations of words, also called word embeddings, is becoming increasingly important in natural language processing (NLP). T. Mikolov et al. (2013) introduced the skip-gram model for learning meaningful word embeddings in their &lt;em&gt;word2vec&lt;/em&gt; tool. The model takes any text corpus as input, processes pairs of words according to an unsupervised language model, and learns the weights in a custom neural network layer (word embeddings).&lt;/p&gt;
&lt;p&gt;There have already been a few attempts at implementing classic neural networks with backpropagation in Verilog or VHDL, but none for word embeddings, and none in &lt;em&gt;MyHDL&lt;/em&gt;, which turns &lt;em&gt;Python&lt;/em&gt; into a hardware description and verification language.&lt;/p&gt;
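&lt;p&gt;The unsupervised pair-generation step of the skip-gram model can be sketched in plain &lt;em&gt;Python&lt;/em&gt; (a simplified illustration, not the project's actual code; it draws one random negative sample per positive pair, matching the &lt;em&gt;1:1&lt;/em&gt; ratio used in this project):&lt;/p&gt;

```python
import random

def skipgram_pairs(tokens, window=2, seed=0):
    """Emit (word, context, label) triples: label 1 for words co-occurring
    within the window, label 0 for a random negative sample (1:1 ratio)."""
    rng = random.Random(seed)
    vocab = sorted(set(tokens))
    triples = []
    for i, word in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                triples.append((word, tokens[j], 1))          # positive pair
                triples.append((word, rng.choice(vocab), 0))  # negative sample
    return triples

pairs = skipgram_pairs("the quick brown fox".split(), window=1)
```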
&lt;p&gt;Open source project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; home: &lt;a href="http://gw.tnode.com/student/rs-skip-gram-in-myhdl/"&gt;http://gw.tnode.com/student/rs-skip-gram-in-myhdl/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-github-square"&gt;&lt;/i&gt; github: &lt;a href="http://github.com/gw0/rs-skip-gram-in-myhdl/"&gt;http://github.com/gw0/rs-skip-gram-in-myhdl/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-laptop"&gt;&lt;/i&gt; technology: &lt;em&gt;Python&lt;/em&gt;, &lt;em&gt;MyHDL&lt;/em&gt; library&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;Requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Python&lt;/em&gt; &lt;small&gt;(2.7)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;python-virtualenv&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;auto-installed &lt;em&gt;NumPy&lt;/em&gt; &lt;small&gt;(1.8.2)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;auto-installed &lt;em&gt;SciPy&lt;/em&gt; &lt;small&gt;(0.14.0)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;auto-installed &lt;em&gt;MyHDL&lt;/em&gt; with &lt;code&gt;fixbv&lt;/code&gt; type &lt;small&gt;(on &lt;a href="https://github.com/gw0/myhdl/tree/mep111_fixbv"&gt;Github&lt;/a&gt; branch &lt;code&gt;mep111_fixbv&lt;/code&gt;)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Installation on &lt;em&gt;Debian&lt;/em&gt;/&lt;em&gt;Ubuntu&lt;/em&gt; using &lt;code&gt;virtualenv&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;apt-get&lt;/span&gt; install python python-virtualenv
$ &lt;span class="kw"&gt;git&lt;/span&gt; clone http://github.com/gw0/rs-skip-gram-in-myhdl.git
$ &lt;span class="kw"&gt;cd&lt;/span&gt; ./rs-skip-gram-in-myhdl
$ &lt;span class="kw"&gt;./requirements.sh&lt;/span&gt;
&lt;span class="kw"&gt;...&lt;/span&gt;
$ &lt;span class="kw"&gt;.&lt;/span&gt; &lt;span class="kw"&gt;venv/bin/activate&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Prepare dataset (example for &lt;code&gt;enwik8-clean.zip&lt;/code&gt;):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cd&lt;/span&gt; ./data
$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://cs.fit.edu/~mmahoney/compression/enwik8.zip
$ &lt;span class="kw"&gt;unzip&lt;/span&gt; enwik8.zip
$ &lt;span class="kw"&gt;./clean-wikifil.pl&lt;/span&gt; enwik8 &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; enwik8-clean
$ &lt;span class="kw"&gt;zip&lt;/span&gt; enwik8-clean.zip enwik8-clean
$ &lt;span class="kw"&gt;rm&lt;/span&gt; enwik8.zip enwik8 enwik8-clean
$ &lt;span class="kw"&gt;cd&lt;/span&gt; ..&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Execute project (experiment &lt;code&gt;ex01&lt;/code&gt; on dataset &lt;code&gt;data/enwik8-clean.zip&lt;/code&gt;):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;./project.py&lt;/span&gt; ex01 data/enwik8-clean.zip&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="implementation"&gt;Implementation&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;using &lt;em&gt;Python&lt;/em&gt; for reading input data&lt;/li&gt;
&lt;li&gt;using &lt;a href="http://www.myhdl.org/"&gt;&lt;em&gt;MyHDL&lt;/em&gt;&lt;/a&gt; for learning&lt;/li&gt;
&lt;li&gt;packing a list of signals to a shadow vector&lt;/li&gt;
&lt;li&gt;unpacking a vector to a list of shadow signals&lt;/li&gt;
&lt;li&gt;fixed-point numbers (experimental &lt;code&gt;fixbv&lt;/code&gt; type, on &lt;a href="https://github.com/gw0/myhdl/tree/mep111_fixbv"&gt;Github&lt;/a&gt; branch &lt;code&gt;mep111_fixbv&lt;/code&gt;)
&lt;ul&gt;
&lt;li&gt;minimal number: &lt;em&gt;-2^7&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;maximal number: &lt;em&gt;2^7&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;resolution: &lt;em&gt;2^-8&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;total bits: &lt;em&gt;16&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;skip-gram model
&lt;ul&gt;
&lt;li&gt;with negative sampling with ratio &lt;em&gt;1:1&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;word embedding vector size: &lt;em&gt;3&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;ReLU activation function with leaky factor: &lt;em&gt;0.01&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;constant learning rate: &lt;em&gt;0.1&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;initial word embedding spread: &lt;em&gt;0.1&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;exponential moving average of mean square error with factor: &lt;em&gt;0.01&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
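&lt;p&gt;The fixed-point format listed above can be mimicked in plain &lt;em&gt;Python&lt;/em&gt; (a sketch, not the actual &lt;code&gt;fixbv&lt;/code&gt; implementation; note that with 16 bits the largest representable value is one resolution step below &lt;em&gt;2^7&lt;/em&gt;):&lt;/p&gt;

```python
# Sketch of the fixbv format: values stored as integer multiples of 2^-8,
# 16 bits total, covering -2^7 up to 2^7 - 2^-8.
FIX_RES = 2.0 ** -8
FIX_MIN = -(2 ** 7)
FIX_MAX = 2 ** 7 - FIX_RES

def to_fixed(x):
    """Quantize a float to the nearest representable fixed-point value."""
    value = round(x / FIX_RES) * FIX_RES
    return min(max(value, FIX_MIN), FIX_MAX)

# The learning rate 0.1 is not exactly representable:
print(to_fixed(0.1))  # prints 0.1015625
```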
&lt;p&gt;Components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;project.py&lt;/strong&gt; - Main code for preparing real input data and passing it to training stimulus.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;train.py&lt;/strong&gt; - Training stimulus of skip-gram model with negative sampling (SGNS).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RamSim.py&lt;/strong&gt; - Simulated RAM model using a Python dictionary.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rectifier.py&lt;/strong&gt; - Rectified linear unit (ReLU) activation function model using &lt;code&gt;fixbv&lt;/code&gt; type.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DotProduct.py&lt;/strong&gt; - Vector dot product model using &lt;code&gt;fixbv&lt;/code&gt; type.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;WordContextProduct.py&lt;/strong&gt; - Word-context embeddings product model needed for skip-gram training.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;WordContextUpdated.py&lt;/strong&gt; - Word-context embeddings updated model needed for skip-gram training.&lt;/li&gt;
&lt;/ul&gt;
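&lt;p&gt;As a reference for the &lt;strong&gt;Rectifier.py&lt;/strong&gt; component, here is a plain-float sketch of the leaky ReLU with factor &lt;em&gt;0.01&lt;/em&gt; (an illustration only; the real component works on quantized &lt;code&gt;fixbv&lt;/code&gt; values, so its printed numbers differ slightly):&lt;/p&gt;

```python
LEAK = 0.01  # leaky factor from the model parameters above

def rectifier(x):
    """Leaky ReLU and its derivative as plain floats."""
    y = max(x, LEAK * x)                            # x if positive, LEAK*x if negative
    y_dx = LEAK if max(x, 0.0) == 0.0 else 1.0      # leaky slope for x at or below zero
    return y, y_dx

print(rectifier(1.5))  # (1.5, 1.0)
```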
&lt;h3 id="testing-components"&gt;Testing components&lt;/h3&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;python&lt;/span&gt; RamSim.py
 &lt;span class="kw"&gt;10&lt;/span&gt; write, addr: 0, din: 0
 &lt;span class="kw"&gt;20&lt;/span&gt; write, addr: 1, din: 2
 &lt;span class="kw"&gt;30&lt;/span&gt; write, addr: 2, din: 4
 &lt;span class="kw"&gt;40&lt;/span&gt; write, addr: 3, din: 6
 &lt;span class="kw"&gt;50&lt;/span&gt; write, addr: 4, din: 8
 &lt;span class="kw"&gt;60&lt;/span&gt; read, addr: 0, dout: 0
 &lt;span class="kw"&gt;70&lt;/span&gt; read, addr: 1, dout: 2
 &lt;span class="kw"&gt;80&lt;/span&gt; read, addr: 2, dout: 4
 &lt;span class="kw"&gt;90&lt;/span&gt; read, addr: 3, dout: 6
&lt;span class="kw"&gt;100&lt;/span&gt; read, addr: 4, dout: 8&lt;/code&gt;&lt;/pre&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;python&lt;/span&gt; Rectifier.py
 &lt;span class="kw"&gt;20&lt;/span&gt; x: -2.500000, y: -0.031250, y_dx: 0.011719
 &lt;span class="kw"&gt;30&lt;/span&gt; x: -2.000000, y: -0.023438, y_dx: 0.011719
 &lt;span class="kw"&gt;40&lt;/span&gt; x: -1.500000, y: -0.019531, y_dx: 0.011719
 &lt;span class="kw"&gt;50&lt;/span&gt; x: -1.000000, y: -0.011719, y_dx: 0.011719
 &lt;span class="kw"&gt;60&lt;/span&gt; x: -0.500000, y: -0.007812, y_dx: 0.011719
 &lt;span class="kw"&gt;70&lt;/span&gt; x: 0.000000, y: 0.000000, y_dx: 0.011719
 &lt;span class="kw"&gt;80&lt;/span&gt; x: 0.500000, y: 0.500000, y_dx: 1.000000
 &lt;span class="kw"&gt;90&lt;/span&gt; x: 1.000000, y: 1.000000, y_dx: 1.000000
&lt;span class="kw"&gt;100&lt;/span&gt; x: 1.500000, y: 1.500000, y_dx: 1.000000
&lt;span class="kw"&gt;110&lt;/span&gt; x: 2.000000, y: 2.000000, y_dx: 1.000000&lt;/code&gt;&lt;/pre&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;python&lt;/span&gt; DotProduct.py
 &lt;span class="kw"&gt;20&lt;/span&gt; a_list: [-2.0, 0.0, 0.0], b_list: [0.0, 0.0, 0.0], y: 0.000000, y_da: [0.0, 0.0, 0.0], y_db: [-2.0, 0.0, 0.0]
 &lt;span class="kw"&gt;30&lt;/span&gt; a_list: [-1.5, 0.0, 0.0], b_list: [0.5, 0.0, 0.0], y: -0.750000, y_da: [0.5, 0.0, 0.0], y_db: [-1.5, 0.0, 0.0]
 &lt;span class="kw"&gt;40&lt;/span&gt; a_list: [-1.0, 0.0, 0.0], b_list: [1.0, 0.0, 0.0], y: -1.000000, y_da: [1.0, 0.0, 0.0], y_db: [-1.0, 0.0, 0.0]
 &lt;span class="kw"&gt;50&lt;/span&gt; a_list: [-0.5, 0.0, 0.0], b_list: [1.5, 0.0, 0.0], y: -0.750000, y_da: [1.5, 0.0, 0.0], y_db: [-0.5, 0.0, 0.0]
 &lt;span class="kw"&gt;60&lt;/span&gt; a_list: [0.0, 0.0, 0.0], b_list: [2.0, 0.0, 0.0], y: 0.000000, y_da: [2.0, 0.0, 0.0], y_db: [0.0, 0.0, 0.0]
 &lt;span class="kw"&gt;70&lt;/span&gt; a_list: [0.5, 0.0, 0.0], b_list: [2.5, 0.0, 0.0], y: 1.250000, y_da: [2.5, 0.0, 0.0], y_db: [0.5, 0.0, 0.0]
 &lt;span class="kw"&gt;80&lt;/span&gt; a_list: [1.0, 0.0, 0.0], b_list: [3.0, 0.0, 0.0], y: 3.000000, y_da: [3.0, 0.0, 0.0], y_db: [1.0, 0.0, 0.0]
 &lt;span class="kw"&gt;90&lt;/span&gt; a_list: [1.5, 0.0, 0.0], b_list: [3.5, 0.0, 0.0], y: 5.250000, y_da: [3.5, 0.0, 0.0], y_db: [1.5, 0.0, 0.0]
&lt;span class="kw"&gt;100&lt;/span&gt; a_list: [2.0, 0.0, 0.0], b_list: [4.0, 0.0, 0.0], y: 8.000000, y_da: [4.0, 0.0, 0.0], y_db: [2.0, 0.0, 0.0]
&lt;span class="kw"&gt;110&lt;/span&gt; a_list: [2.5, 0.0, 0.0], b_list: [4.5, 0.0, 0.0], y: 11.250000, y_da: [4.5, 0.0, 0.0], y_db: [2.5, 0.0, 0.0]&lt;/code&gt;&lt;/pre&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;python&lt;/span&gt; WordContextProduct.py
 &lt;span class="kw"&gt;20&lt;/span&gt; word: [-2.0, 0.0, 0.0], context: [0.0, 0.0, 0.0], y: 0.000000, y_dword: [0.0, 0.0, 0.0], y_dcontext: [-0.0234375, 0.0, 0.0]
 &lt;span class="kw"&gt;30&lt;/span&gt; word: [-1.5, 0.0, 0.0], context: [0.5, 0.0, 0.0], y: -0.007812, y_dword: [0.0078125, 0.0, 0.0], y_dcontext: [-0.01953125, 0.0, 0.0]
 &lt;span class="kw"&gt;40&lt;/span&gt; word: [-1.0, 0.0, 0.0], context: [1.0, 0.0, 0.0], y: -0.011719, y_dword: [0.01171875, 0.0, 0.0], y_dcontext: [-0.01171875, 0.0, 0.0]
 &lt;span class="kw"&gt;50&lt;/span&gt; word: [-0.5, 0.0, 0.0], context: [1.5, 0.0, 0.0], y: -0.007812, y_dword: [0.01953125, 0.0, 0.0], y_dcontext: [-0.0078125, 0.0, 0.0]
 &lt;span class="kw"&gt;60&lt;/span&gt; word: [0.0, 0.0, 0.0], context: [2.0, 0.0, 0.0], y: 0.000000, y_dword: [0.0234375, 0.0, 0.0], y_dcontext: [0.0, 0.0, 0.0]
 &lt;span class="kw"&gt;70&lt;/span&gt; word: [0.5, 0.0, 0.0], context: [2.5, 0.0, 0.0], y: 1.250000, y_dword: [2.5, 0.0, 0.0], y_dcontext: [0.5, 0.0, 0.0]
 &lt;span class="kw"&gt;80&lt;/span&gt; word: [1.0, 0.0, 0.0], context: [3.0, 0.0, 0.0], y: 3.000000, y_dword: [3.0, 0.0, 0.0], y_dcontext: [1.0, 0.0, 0.0]
 &lt;span class="kw"&gt;90&lt;/span&gt; word: [1.5, 0.0, 0.0], context: [3.5, 0.0, 0.0], y: 5.250000, y_dword: [3.5, 0.0, 0.0], y_dcontext: [1.5, 0.0, 0.0]
&lt;span class="kw"&gt;100&lt;/span&gt; word: [2.0, 0.0, 0.0], context: [4.0, 0.0, 0.0], y: 8.000000, y_dword: [4.0, 0.0, 0.0], y_dcontext: [2.0, 0.0, 0.0]
&lt;span class="kw"&gt;110&lt;/span&gt; word: [2.5, 0.0, 0.0], context: [4.5, 0.0, 0.0], y: 11.250000, y_dword: [4.5, 0.0, 0.0], y_dcontext: [2.5, 0.0, 0.0]&lt;/code&gt;&lt;/pre&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;python&lt;/span&gt; WordContextUpdated.py
 &lt;span class="kw"&gt;20&lt;/span&gt; word: [-2.0, 0.0, 0.0], context: [0.0, 0.0, 0.0], mse: 1.000000, y: 0.000000, new_word: [-2.0, 0.0, 0.0], new_context: [-0.00390625, 0.0, 0.0]
 &lt;span class="kw"&gt;30&lt;/span&gt; word: [-1.5, 0.0, 0.0], context: [0.5, 0.0, 0.0], mse: 1.015625, y: -0.007812, new_word: [-1.5, 0.0, 0.0], new_context: [0.49609375, 0.0, 0.0]
 &lt;span class="kw"&gt;40&lt;/span&gt; word: [-1.0, 0.0, 0.0], context: [1.0, 0.0, 0.0], mse: 1.023438, y: -0.011719, new_word: [-1.0, 0.0, 0.0], new_context: [1.0, 0.0, 0.0]
 &lt;span class="kw"&gt;50&lt;/span&gt; word: [-0.5, 0.0, 0.0], context: [1.5, 0.0, 0.0], mse: 1.015625, y: -0.007812, new_word: [-0.49609375, 0.0, 0.0], new_context: [1.5, 0.0, 0.0]
 &lt;span class="kw"&gt;60&lt;/span&gt; word: [0.0, 0.0, 0.0], context: [2.0, 0.0, 0.0], mse: 1.000000, y: 0.000000, new_word: [0.00390625, 0.0, 0.0], new_context: [2.0, 0.0, 0.0]
 &lt;span class="kw"&gt;70&lt;/span&gt; word: [0.5, 0.0, 0.0], context: [2.5, 0.0, 0.0], mse: 0.062500, y: 1.250000, new_word: [0.4375, 0.0, 0.0], new_context: [2.48828125, 0.0, 0.0]
 &lt;span class="kw"&gt;80&lt;/span&gt; word: [1.0, 0.0, 0.0], context: [3.0, 0.0, 0.0], mse: 4.000000, y: 3.000000, new_word: [0.390625, 0.0, 0.0], new_context: [2.796875, 0.0, 0.0]
 &lt;span class="kw"&gt;90&lt;/span&gt; word: [1.5, 0.0, 0.0], context: [3.5, 0.0, 0.0], mse: 18.062500, y: 5.250000, new_word: [-0.01171875, 0.0, 0.0], new_context: [2.8515625, 0.0, 0.0]
&lt;span class="kw"&gt;100&lt;/span&gt; word: [2.0, 0.0, 0.0], context: [4.0, 0.0, 0.0], mse: 49.000000, y: 8.000000, new_word: [-0.84375, 0.0, 0.0], new_context: [2.578125, 0.0, 0.0]
&lt;span class="kw"&gt;110&lt;/span&gt; word: [2.5, 0.0, 0.0], context: [4.5, 0.0, 0.0], mse: 105.062500, y: 11.250000, new_word: [-2.18359375, 0.0, 0.0], new_context: [1.8984375, 0.0, 0.0]

 &lt;span class="kw"&gt;10&lt;/span&gt; mse: 0.992188, y: 0.003906, word: [0.0625, 0.02734375, 0.07421875], context: [0.00390625, 0.0234375, 0.06640625]
 &lt;span class="kw"&gt;20&lt;/span&gt; mse: 0.984375, y: 0.007812, word: [0.0625, 0.03125, 0.08203125], context: [0.01171875, 0.02734375, 0.07421875]
 &lt;span class="kw"&gt;30&lt;/span&gt; mse: 0.984375, y: 0.007812, word: [0.0625, 0.03515625, 0.08984375], context: [0.01953125, 0.03125, 0.08203125]
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;100&lt;/span&gt; mse: 0.921875, y: 0.039062, word: [0.09375, 0.0625, 0.16796875], context: [0.07421875, 0.05859375, 0.16796875]
 &lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;200&lt;/span&gt; mse: 0.597656, y: 0.226562, word: [0.20703125, 0.1484375, 0.40234375], context: [0.19921875, 0.1484375, 0.40234375]
 &lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;300&lt;/span&gt; mse: 0.097656, y: 0.687500, word: [0.35546875, 0.26171875, 0.703125], context: [0.3515625, 0.26171875, 0.703125]
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;400&lt;/span&gt; mse: 0.003906, y: 0.949219, word: [0.421875, 0.30859375, 0.82421875], context: [0.41796875, 0.30859375, 0.82421875]
&lt;span class="kw"&gt;410&lt;/span&gt; mse: 0.000000, y: 0.960938, word: [0.42578125, 0.30859375, 0.828125], context: [0.421875, 0.30859375, 0.828125]&lt;/code&gt;&lt;/pre&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;python&lt;/span&gt; train.py
   &lt;span class="kw"&gt;40&lt;/span&gt; 1 mse_ema: 1.000000, mse: 0.984375, word: [0.09375, 0.0625, 0.0], context: [0.06640625, 0.0546875, 0.07421875]
  &lt;span class="kw"&gt;100&lt;/span&gt; 1 mse_ema: 1.000000, mse: 0.000000, word: [0.1015625, 0.06640625, 0.0078125], context: [0.05078125, 0.0234375, 0.05859375]
  &lt;span class="kw"&gt;160&lt;/span&gt; 1 mse_ema: 0.988281, mse: 0.992188, word: [0.05859375, 0.03125, 0.01953125], context: [0.0625, 0.04296875, 0.0234375]
  &lt;span class="kw"&gt;220&lt;/span&gt; 1 mse_ema: 0.988281, mse: 0.000000, word: [0.06640625, 0.03515625, 0.0234375], context: [0.0234375, 0.03515625, 0.06640625]
  &lt;span class="kw"&gt;280&lt;/span&gt; 1 mse_ema: 0.976562, mse: 0.984375, word: [0.0859375, 0.0390625, 0.0], context: [0.0546875, 0.0625, 0.0078125]
  &lt;span class="kw"&gt;340&lt;/span&gt; 1 mse_ema: 0.976562, mse: 0.000000, word: [0.08984375, 0.046875, 0.0], context: [0.09765625, 0.01953125, 0.03515625]
&lt;span class="kw"&gt;...&lt;/span&gt;
 &lt;span class="kw"&gt;1960&lt;/span&gt; 1 mse_ema: 0.816406, mse: 0.992188, word: [0.0, 0.0546875, 0.015625], context: [0.03515625, 0.0859375, 0.05859375]
 &lt;span class="kw"&gt;2020&lt;/span&gt; 1 mse_ema: 0.820312, mse: 0.000000, word: [0.00390625, 0.0625, 0.0234375], context: [0.00390625, 0.00390625, 0.0234375]
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;10000&lt;/span&gt; 5 mse_ema: 0.554688, mse: 0.945312, word: [0.12890625, 0.09765625, 0.06640625], context: [0.12109375, 0.046875, 0.09765625]
&lt;span class="kw"&gt;10060&lt;/span&gt; 5 mse_ema: 0.558594, mse: 0.000000, word: [0.140625, 0.1015625, 0.07421875], context: [0.04296875, 0.0703125, 0.06640625]
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;20080&lt;/span&gt; 9 mse_ema: 0.492188, mse: 0.871094, word: [0.01953125, 0.23828125, 0.1484375], context: [0.01171875, 0.1953125, 0.12109375]
&lt;span class="kw"&gt;20140&lt;/span&gt; 9 mse_ema: 0.496094, mse: 0.000000, word: [0.01953125, 0.2578125, 0.16015625], context: [0.02734375, 0.0234375, 0.0078125]
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;30040&lt;/span&gt; 14 mse_ema: 0.457031, mse: 0.835938, word: [0.17578125, 0.140625, 0.18359375], context: [0.1796875, 0.140625, 0.19140625]
&lt;span class="kw"&gt;30100&lt;/span&gt; 14 mse_ema: 0.460938, mse: 0.000000, word: [0.19140625, 0.15234375, 0.203125], context: [0.01953125, 0.0390625, 0.0546875]
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;40000&lt;/span&gt; 18 mse_ema: 0.328125, mse: 0.628906, word: [0.16796875, 0.35546875, 0.234375], context: [0.17578125, 0.3515625, 0.21875]
&lt;span class="kw"&gt;40060&lt;/span&gt; 18 mse_ema: 0.332031, mse: 0.000000, word: [0.18359375, 0.3828125, 0.25390625], context: [0.05078125, 0.03125, 0.078125]
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;50020&lt;/span&gt; 22 mse_ema: 0.164062, mse: 0.015625, word: [0.23828125, 0.6171875, 0.8046875], context: [0.046875, 0.05859375, 0.09765625]
&lt;span class="kw"&gt;50080&lt;/span&gt; 22 mse_ema: 0.164062, mse: 0.082031, word: [0.01953125, 0.8671875, 0.53515625], context: [0.01953125, 0.59765625, 0.3671875]
&lt;span class="kw"&gt;50140&lt;/span&gt; 22 mse_ema: 0.164062, mse: 0.003906, word: [0.01953125, 0.8828125, 0.546875], context: [0.09375, 0.02734375, 0.06640625]
&lt;span class="kw"&gt;50200&lt;/span&gt; 23 mse_ema: 0.164062, mse: 0.285156, word: [0.5078125, 0.38671875, 0.234375], context: [0.5078125, 0.38671875, 0.23828125]&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="packing-a-list-of-signals-to-a-shadow-vector"&gt;Packing a list of signals to a shadow vector&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;MyHDL&lt;/em&gt; does not support conversion of a list of signals as a port to a module. Attempting to convert them to Verilog or VHDL results in:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;myhdl.ConversionError: in file DotProduct.py, line 14:
    List of signals as a port is not supported: y_da_list&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Instead of manipulating bits directly, shadow signals provide a read-only higher-level abstraction. They can also be used to cast between types.&lt;/p&gt;
&lt;p&gt;Let's suppose that in the test bench you manipulate a list of signals &lt;code&gt;a_list&lt;/code&gt; (read-write) and want to pass them all to your module (read-only). To make such code convertible, the list of signals must be concatenated into a shadow vector signal.&lt;/p&gt;
&lt;pre class="sourceCode python"&gt;&lt;code class="sourceCode python"&gt;    a_list = [ Signal(fixbv(&lt;span class="fl"&gt;0.0&lt;/span&gt;, &lt;span class="dt"&gt;min&lt;/span&gt;=fix_min, &lt;span class="dt"&gt;max&lt;/span&gt;=fix_max, res=fix_res)) &lt;span class="kw"&gt;for&lt;/span&gt; _ in &lt;span class="dt"&gt;range&lt;/span&gt;(dim) ]
    a_vec = ConcatSignal(*&lt;span class="dt"&gt;reversed&lt;/span&gt;(a_list))
    foo = Foo(y, a_vec)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Inside your module &lt;code&gt;Foo(y, a_vec)&lt;/code&gt; you may want to access the individual signals again, so you must assign slices of the vector back to a list of shadow signals.&lt;/p&gt;
&lt;pre class="sourceCode python"&gt;&lt;code class="sourceCode python"&gt;    a_list = [ Signal(fixbv(&lt;span class="fl"&gt;0.0&lt;/span&gt;, &lt;span class="dt"&gt;min&lt;/span&gt;=fix_min, &lt;span class="dt"&gt;max&lt;/span&gt;=fix_max, res=fix_res)) &lt;span class="kw"&gt;for&lt;/span&gt; j in &lt;span class="dt"&gt;range&lt;/span&gt;(dim) ]
    &lt;span class="kw"&gt;for&lt;/span&gt; j in &lt;span class="dt"&gt;range&lt;/span&gt;(dim):
        a_list[j].assign(a_vec((j + &lt;span class="dv"&gt;1&lt;/span&gt;) * fix_width, j * fix_width))&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="unpacking-a-vector-to-a-list-of-shadow-signals"&gt;Unpacking a vector to a list of shadow signals&lt;/h3&gt;
&lt;p&gt;Let's suppose your module outputs a vector &lt;code&gt;y_vec&lt;/code&gt; (read-write), but in your test bench you want to process its elements as individual signals (read-only). For convertible code, first prepare a sufficiently wide bit vector signal and then assign slices of it into a list of shadow signals &lt;code&gt;y_list&lt;/code&gt;.&lt;/p&gt;
&lt;pre class="sourceCode python"&gt;&lt;code class="sourceCode python"&gt;    y_vec = Signal(intbv(&lt;span class="dv"&gt;0&lt;/span&gt;)[dim * fix_width:])
    y_list = [ Signal(fixbv(&lt;span class="fl"&gt;0.0&lt;/span&gt;, &lt;span class="dt"&gt;min&lt;/span&gt;=fix_min, &lt;span class="dt"&gt;max&lt;/span&gt;=fix_max, res=fix_res)) &lt;span class="kw"&gt;for&lt;/span&gt; j in &lt;span class="dt"&gt;range&lt;/span&gt;(dim) ]
    &lt;span class="kw"&gt;for&lt;/span&gt; j in &lt;span class="dt"&gt;range&lt;/span&gt;(dim):
        y_list[j].assign(y_vec((j + &lt;span class="dv"&gt;1&lt;/span&gt;) * fix_width, j * fix_width))
    foo = Foo(y_vec, a)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Assigning values inside your module &lt;code&gt;Foo(y_vec, a)&lt;/code&gt; is more troublesome, but it can be accomplished with bitwise assignments at the correct offsets.&lt;/p&gt;
&lt;pre class="sourceCode python"&gt;&lt;code class="sourceCode python"&gt;    tmp = fixbv(&lt;span class="fl"&gt;123.0&lt;/span&gt;, &lt;span class="dt"&gt;min&lt;/span&gt;=fix_min, &lt;span class="dt"&gt;max&lt;/span&gt;=fix_max, res=fix_res)
    y_vec.&lt;span class="dt"&gt;next&lt;/span&gt;[(j + &lt;span class="dv"&gt;1&lt;/span&gt;) * fix_width:j * fix_width] = tmp[:]&lt;/code&gt;&lt;/pre&gt;
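&lt;p&gt;The slice offsets used above can be illustrated with plain &lt;em&gt;Python&lt;/em&gt; integers (a sketch independent of &lt;em&gt;MyHDL&lt;/em&gt;; the &lt;code&gt;fix_width&lt;/code&gt; and &lt;code&gt;dim&lt;/code&gt; values and the sample data are assumed for illustration):&lt;/p&gt;

```python
fix_width = 8  # assumed bit width of one fixed-point element
dim = 3        # assumed number of elements in the list

# Pack a list of small integers into one vector (element 0 in the lowest
# bits), mirroring ConcatSignal(*reversed(a_list)).
values = [0x11, 0x22, 0x33]
vec = 0
for j in range(dim):
    vec |= values[j] << (j * fix_width)

# Unpack slice [(j + 1) * fix_width : j * fix_width] back into a list,
# mirroring a_vec((j + 1) * fix_width, j * fix_width).
mask = (1 << fix_width) - 1
unpacked = [(vec >> (j * fix_width)) & mask for j in range(dim)]
assert unpacked == values
```

&lt;p&gt;The same shift-and-mask arithmetic is what the generated Verilog/VHDL performs on the concatenated vector.&lt;/p&gt;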
&lt;h2 id="feedback"&gt;Feedback&lt;/h2&gt;
&lt;p&gt;Unfortunately, further development beyond the current execution in the &lt;em&gt;MyHDL&lt;/em&gt; simulator is not planned. But in case you fix any bugs or develop new features, feel free to submit a pull request on &lt;a href="http://github.com/gw0/rs-skip-gram-in-myhdl/"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="license"&gt;License&lt;/h2&gt;
&lt;p&gt;Copyright © 2015 &lt;em&gt;gw0&lt;/em&gt; [&lt;a href="http://gw.tnode.com/"&gt;http://gw.tnode.com/&lt;/a&gt;] &amp;lt;&lt;script type="text/javascript"&gt;
&lt;!--
h='&amp;#116;&amp;#110;&amp;#x6f;&amp;#100;&amp;#x65;&amp;#46;&amp;#x63;&amp;#x6f;&amp;#x6d;';a='&amp;#64;';n='&amp;#x67;&amp;#x77;&amp;#46;&amp;#50;&amp;#48;&amp;#x31;&amp;#x35;';e=n+a+h;
document.write('&lt;a h'+'ref'+'="ma'+'ilto'+':'+e+'"&gt;'+e+'&lt;\/'+'a'+'&gt;');
// --&gt;
&lt;/script&gt;&lt;noscript&gt;gw.2015 at tnode dot com&lt;/noscript&gt;&amp;gt;&lt;/p&gt;
&lt;p&gt;This code is licensed under the &lt;a href="LICENSE_AGPL-3.0.txt"&gt;GNU Affero General Public License 3.0+&lt;/a&gt; (&lt;em&gt;AGPL-3.0+&lt;/em&gt;). Note that it is mandatory to make all modifications and complete source code publicly available to any user.&lt;/p&gt;
</summary><category term="deep learning"></category><category term="nlp"></category><category term="hardware"></category></entry><entry><title>Learning Representations for Text-level Discourse Parsing</title><link href="http://gw.tnode.com/deep-learning/acl2015-presentation/" rel="alternate"></link><updated>2015-07-28T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2015-07-24:deep-learning/acl2015-presentation/</id><summary type="html">&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
&lt;br /&gt;&lt;br /&gt;&lt;center&gt;
#### Thesis proposal

## Learning Representations
## for Text-level Discourse Parsing

&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;

&lt;small&gt;
Copyright &amp;copy; 2015 *&lt;a href="http://gw.tnode.com/" rel="author"&gt;gw0&lt;/a&gt;* [&lt;http://gw.tnode.com/&gt;] &amp;lt;&lt;gw.2015@tnode.com&gt;&amp;gt;
&lt;/small&gt;
&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Overview

- motivation

- discourse parsing
  - PDTB-style

- deep learning architectures
  - sequence processing
  - word embeddings

- our approach
  - key ideas
  - guided layer-wise multi-task learning

- progress
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Motivation

- natural language processing (NLP)
  - large pipelines of **independently-constructed components** or subtasks
  - traditionally **hand-engineered sparse features** based on language/domain/task specific knowledge
  - still room for improvement on challenging NLP tasks

- **deep learning architectures**
  - backpropagation could be the one learning algorithm to unify learning of all components
  - latent features/representations are automatically learned as distributed dense vectors
  - surprising results for a number of NLP tasks
&lt;/script&gt;&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Discourse parsing

- **discourse**: a piece of text meant to communicate specific information (clauses, sentences, or even paragraphs)
- a discourse is understood only in relation to other discourses; their joint meaning is larger than each individual unit's meaning alone

&gt; [&lt;span style="color:#cc0000;"&gt;Index arbitrage doesn't work&lt;/span&gt;]&lt;sub&gt;arg1&lt;/sub&gt;,
&gt;
&gt; &lt;u&gt;*and*&lt;/u&gt;
&gt; [&lt;span style="color:#0000cc;"&gt;it scares natural buyers of stock&lt;/span&gt;]&lt;sub&gt;arg2&lt;/sub&gt;.
&gt;
&gt; &lt;small&gt;&amp;mdash; PDTB-style, *id:* 14883, *type:* explicit, *sense:* Expansion.Conjunction&lt;/small&gt;

&lt;!-- --&gt;

&gt; [&lt;span style="color:#0000cc;"&gt;But&lt;/span&gt;]&lt;sub&gt;arg2&lt;/sub&gt;
&gt;
&gt; &lt;u&gt;*if*&lt;/u&gt;
&gt; [&lt;span style="color:#cc0000;"&gt;this prompts others to consider the same thing&lt;/span&gt;]&lt;sub&gt;arg1&lt;/sub&gt;,
&gt;
&gt; &lt;u&gt;*then*&lt;/u&gt;
&gt; [&lt;span style="color:#0000cc;"&gt;it may become much more important&lt;/span&gt;]&lt;sub&gt;arg2&lt;/sub&gt;.
&gt;
&gt; &lt;small&gt;&amp;mdash; PDTB-style, *id:* 14905, *type:* explicit, *sense:* Contingency.Condition&lt;/small&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### PDTB-style examples

&gt; He added
&gt; [&lt;span style="color:#cc0000;"&gt;that "having just one firm do this isn't going to mean a hill of beans&lt;/span&gt;]&lt;sub&gt;arg1&lt;/sub&gt;.
&gt;
&gt; &lt;u&gt;*But*&lt;/u&gt;
&gt; [&lt;span style="color:#0000cc;"&gt;if this prompts others to consider the same thing, then it may become much more important&lt;/span&gt;]&lt;sub&gt;arg2&lt;/sub&gt;."
&gt;
&gt; &lt;small&gt;&amp;mdash; PDTB-style, *id:* 14904, *type:* explicit, *sense:* Comparison.Concession&lt;/small&gt;

&lt;!-- --&gt;

&gt; &lt;small&gt;
&gt; In addition, Black &amp; Decker had said it would sell two other undisclosed Emhart operations if it received the right price. [&lt;span style="color:#cc0000;"&gt;Bostic is one of the previously unnamed units, and the first of the five to be sold.&lt;/span&gt;]&lt;sub&gt;arg1&lt;/sub&gt;
&gt; &lt;/small&gt;
&gt;
&gt; &lt;small&gt;
&gt; [&lt;span style="color:#cc0000;"&gt;The company is still negotiating the sales of the other four units and expects to announce agreements by the end of the year&lt;/span&gt;]&lt;sub&gt;arg1&lt;/sub&gt;.
&gt; [&lt;span style="color:#0000cc;"&gt;The five units generated sales of about $1.3 billion in 1988, almost half of Emhart's $2.3 billion revenue&lt;/span&gt;]&lt;sub&gt;arg2&lt;/sub&gt;.
&gt; Bostic posted 1988 sales of $255 million.
&gt; &lt;/small&gt;
&gt;
&gt; &lt;small&gt;&amp;mdash; PDTB-style, *id:* 12886, *type:* entrel, *sense:* EntRel&lt;/small&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### PDTB-style discourse parsing

- **Penn Discourse Treebank** adopts the predicate-argument view and independence of discourse relations
  - 2159 articles from the Wall Street Journal
  - 4 discourse sense classes, 16 types, 23 subtypes

- also called shallow discourse parsing
  - discourse relations are not connected to each other to form a connected structure (tree or graph)
  - adjacent/non-adjacent units in same/different sentences

- primary goals
  - locate explicit or implicit discourse **connective**
  - locate text spans for **argument 1 and 2**
  - predict **sense** that characterizes the nature of the relation
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Deep learning architectures

- multiple layers of learning blocks stacked on each other
- beginning with raw data, its representation is transformed into increasingly higher and more abstract forms in each layer, until reaching the final low-dimensional features for a given task

&lt;center&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/deeper-intuition.jpg" width="85%" alt="Deeper intuition on representation learning." style="border:0; margin:0; padding:7px 14px;" /&gt;&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Sequence processing

Text documents of different lengths are usually treated as a **sequence of words**:

- transition-based processing mechanisms
- **recurrent neural networks** (RNNs)
  - applying the same set of weights over the sequence (temporal dimension) or structure (tree-based)

&lt;center&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/rnn-diagrams.jpg" width="80%" alt="Various usage diagrams of recurrent neural networks." style="border:0; margin:-30px 0 0 0; padding:7px 14px;" /&gt;&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Word embeddings

Represent text as numeric vectors of fixed size:

- **word embeddings**: SGNS (word2vec), GloVe, ...
- feature/phrase/document embeddings
- character-level convolutional networks

**Unsupervised** pre-training helps develop natural abstractions.

Sharing word embeddings in **multi-task learning** improves performance in the absence of hand-engineered features.

&lt;center&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/embedding-word.png" width="40%" alt="Word embeddings." style="border:0; margin:0; padding:7px 14px;" /&gt;&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Our approach

- PDTB-style end-to-end discourse parser
- one deep learning architecture instead of multiple independently-constructed components
- almost without any hand-engineered NLP knowledge

*Input:*

- tokenized text documents (from CoNLL 2015 shared task)

*Output:*

- extracted PDTB-style discourse relations
  - connectives
  - arguments 1 and 2
  - discourse senses
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Key ideas

- **unified end-to-end architecture**
  - backpropagation as the one learning algorithm for all discourse parsing subtasks and related NLP tasks
- **automatic learning of representations**
  - in hidden layers of deep learning architectures (bidirectional deep RNN/LSTM)
- **shared intermediate representations**
  - partially stacked on top of each other to benefit from each other's representations
- **guided layer-wise multi-task learning**
  - jointly learning all discourse parsing subtasks and related NLP tasks including unsupervised pre-training
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Guided layer-wise multi-task learning

&lt;center&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/layer-wise-multi-task-learning.png" width="57%" alt="Illustration of our unified end-to-end approach for text-level discourse parsing with guided layer-wise multi-task learning of higher representations." style="border:0; margin:0; padding:7px 14px;" /&gt;&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Progress

- technology
  - *Python*
  - *Theano*: fast tensor manipulation library
  - *Keras*: modular neural network library

- resources and inputs
  - pre-trained word2vec lookup table (on Google News)
  - tokenized text documents as input
  - POS tags of input tokens

- evaluation (from CoNLL 2015 shared task)
  - performance in terms of precision/recall/F1-score
  - explicit connectives, argument 1, 2 and combined extraction, sense classification, overall
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Complication or useful?

Experiments with single-task learning with bidirectional deep RNN for discourse sense tagging:

&lt;center&gt;&lt;img src="http://gw.tnode.com/deep-learning/img/rnn-sense-tagger.png" width="80%" alt="Bidirectional deep recurrent neural network for disourse sense tagging." style="border:0; margin:0; padding:7px 14px;" /&gt;&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Single-task results

- long training time for randomly initialized weights
  - lower tasks improve initialization
- overfitting training data
  - more tasks improve generalization

### Future experiments

- various discourse parsing subtasks
- various related NLP tasks (chunking, POS, NER, SRL, ...)
- different representation structures
- different activation, optimization, architectures
- long short-term memory (LSTM)
- neural Turing machines (NTM)
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
&lt;br /&gt;&lt;br /&gt;&lt;center&gt;
## Does it make sense?

I would like to hear your *feedback* and *ideas*&lt;br /&gt;
for my thesis proposal.

&lt;br /&gt;

### Thank you

&lt;br /&gt;

&lt;small&gt;&lt;http://gw.tnode.com/deep-learning/acl2015-presentation/&gt;&lt;/small&gt;
&lt;small&gt;
Copyright &amp;copy; 2015 *&lt;a href="http://gw.tnode.com/" rel="author"&gt;gw0&lt;/a&gt;* [&lt;http://gw.tnode.com/&gt;] &amp;lt;&lt;gw.2015@tnode.com&gt;&amp;gt;
&lt;/small&gt;
&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;
</summary><category term="deep learning"></category><category term="nlp"></category><category term="presentation"></category></entry><entry><title>Docker ubuntu-systemd</title><link href="http://gw.tnode.com/docker/ubuntu-systemd/" rel="alternate"></link><updated>2016-06-16T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2015-07-02:docker/ubuntu-systemd/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Docker 1.x logo" height="120" src="http://gw.tnode.com/docker/img/docker-1x-logo.png" width="354"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;ubuntu-systemd&lt;/em&gt;&lt;/strong&gt; is a minimal &lt;a href="http://www.docker.com/"&gt;&lt;em&gt;Docker&lt;/em&gt;&lt;/a&gt; image built from &lt;a href="http://www.ubuntu.com/"&gt;&lt;em&gt;Ubuntu 15.04&lt;/em&gt;&lt;/a&gt; with &lt;em&gt;systemd&lt;/em&gt; designed for running in an unprivileged container. Main philosophy:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;simple to use and maintain, same system management experience&lt;/li&gt;
&lt;li&gt;transparent build process, unlike “official” &lt;em&gt;Ubuntu&lt;/em&gt; images&lt;/li&gt;
&lt;li&gt;treat containers as VMs, multiple processes inside a single container&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can use it as a base for your own &lt;em&gt;Docker&lt;/em&gt; images. Just pull it from &lt;a href="http://hub.docker.com/r/tozd/ubuntu-systemd/"&gt;the &lt;em&gt;Docker hub&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Open source project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; home: &lt;a href="http://gw.tnode.com/docker/ubuntu-systemd/"&gt;http://gw.tnode.com/docker/ubuntu-systemd/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-github-square"&gt;&lt;/i&gt; github: &lt;a href="http://github.com/tozd/docker-ubuntu-systemd/"&gt;http://github.com/tozd/docker-ubuntu-systemd/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-laptop"&gt;&lt;/i&gt; technology: &lt;em&gt;ubuntu&lt;/em&gt;, &lt;em&gt;systemd&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-database"&gt;&lt;/i&gt; docker hub: &lt;a href="https://hub.docker.com/r/tozd/ubuntu-systemd/"&gt;https://hub.docker.com/r/tozd/ubuntu-systemd/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;To run &lt;em&gt;systemd&lt;/em&gt; in an unprivileged container a few manual tweaks are currently necessary. Remember to replace &lt;code&gt;/tmp&lt;/code&gt; in the following commands with a non-world-readable directory.&lt;/p&gt;
&lt;p&gt;First, it depends on the &lt;em&gt;cgroups&lt;/em&gt; directory; at minimum it needs read-only access to the &lt;code&gt;cgroup name=systemd&lt;/code&gt; hierarchy (in &lt;code&gt;/sys/fs/cgroup/systemd&lt;/code&gt;). Let's prepare &lt;strong&gt;one for all&lt;/strong&gt; &lt;em&gt;ubuntu-systemd&lt;/em&gt; containers:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;mkdir&lt;/span&gt; -p /tmp/cgroup/systemd &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;mount&lt;/span&gt; -t cgroup systemd /tmp/cgroup/systemd -o ro,noexec,nosuid,nodev,none,name=systemd

&lt;span class="co"&gt;# or alternatively:&lt;/span&gt;
$ &lt;span class="kw"&gt;mkdir&lt;/span&gt; -p /tmp/cgroup/systemd &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;mount&lt;/span&gt; --bind /sys/fs/cgroup/systemd /tmp/cgroup/systemd&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, it needs &lt;em&gt;tmpfs&lt;/em&gt; mount points in &lt;code&gt;/run&lt;/code&gt; and &lt;code&gt;/run/lock&lt;/code&gt;. These need to be prepared &lt;strong&gt;separately for each&lt;/strong&gt; &lt;em&gt;ubuntu-systemd&lt;/em&gt; container:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;mkdir&lt;/span&gt; /tmp/run &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;mount&lt;/span&gt; -t tmpfs tmpfs /tmp/run
$ &lt;span class="kw"&gt;mkdir&lt;/span&gt; /tmp/run/lock &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;mount&lt;/span&gt; -t tmpfs tmpfs /tmp/run/lock&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You could also add all mount points permanently to your &lt;code&gt;/etc/fstab&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;systemd  /tmp/cgroup/systemd  cgroup  ro,noexec,nosuid,nodev,none,name=systemd  0  0
tmpfs  /tmp/run  tmpfs  nodev,nosuid,mode=755,size=65536k  0  0
tmpfs  /tmp/run/lock  tmpfs  nodev,nosuid,mode=755,size=65536k  0  0&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then you are &lt;strong&gt;ready to use&lt;/strong&gt; your &lt;em&gt;Docker&lt;/em&gt; container:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -d --name xxx -v /tmp/cgroup:/sys/fs/cgroup:ro -v /tmp/run:/run:rw tozd/ubuntu-systemd
$ &lt;span class="kw"&gt;docker&lt;/span&gt; exec -it xxx /bin/bash&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Please note that &lt;strong&gt;graceful stopping&lt;/strong&gt; and removal of the &lt;em&gt;Docker&lt;/em&gt; container looks a little different now:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;docker&lt;/span&gt; kill --signal SIGPWR xxx &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;docker&lt;/span&gt; stop xxx

$ &lt;span class="kw"&gt;docker&lt;/span&gt; rm -f xxx
$ &lt;span class="kw"&gt;umount&lt;/span&gt; /tmp/run/lock /tmp/run &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;rmdir&lt;/span&gt; /tmp/run
$ &lt;span class="kw"&gt;umount&lt;/span&gt; /tmp/cgroup/systemd &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;rmdir&lt;/span&gt; /tmp/cgroup/systemd /tmp/cgroup&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="build"&gt;Build&lt;/h2&gt;
&lt;h3 id="build-ubuntu-systemd"&gt;Build &lt;code&gt;ubuntu-systemd&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;All instructions from scratch are included in the &lt;code&gt;Dockerfile&lt;/code&gt;. To build it you just need to run:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;git&lt;/span&gt; clone http://github.com/tozd/docker-ubuntu-systemd.git
$ &lt;span class="kw"&gt;docker&lt;/span&gt; build -t tozd/ubuntu-systemd -t tozd/ubuntu-systemd:15.04.0 ./docker-ubuntu-systemd&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="build-debootstrap-minbase.tgz"&gt;Build &lt;code&gt;debootstrap-minbase.tgz&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;The standard &lt;em&gt;debootstrap&lt;/em&gt; tool is used to generate the initial minimal &lt;em&gt;Ubuntu&lt;/em&gt; system. As we are using &lt;em&gt;Docker&lt;/em&gt;, we can build the base image there without installing anything on the host:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;mkdir&lt;/span&gt; /tmp/ubuntu-systemd
$ &lt;span class="kw"&gt;docker&lt;/span&gt; run -it --rm --privileged -v /tmp/ubuntu-systemd:/mnt ubuntu /bin/bash

$ &lt;span class="kw"&gt;cd&lt;/span&gt; /mnt
$ &lt;span class="kw"&gt;apt-get&lt;/span&gt; update &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;apt-get&lt;/span&gt; install -y debootstrap

$ &lt;span class="kw"&gt;debootstrap&lt;/span&gt; --variant=minbase --components=main vivid ./rootfs
$ &lt;span class="kw"&gt;rm&lt;/span&gt; -f ./rootfs/var/cache/apt/archives/*.deb ./rootfs/var/cache/apt/archives/partial/*.deb ./rootfs/var/cache/apt/*.bin

$ &lt;span class="kw"&gt;tar&lt;/span&gt; --numeric-owner -zcf &lt;span class="st"&gt;"debootstrap-minbase.tgz"&lt;/span&gt; -C &lt;span class="st"&gt;"./rootfs"&lt;/span&gt; . &lt;span class="kw"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="kw"&gt;rm&lt;/span&gt; -rf &lt;span class="st"&gt;"./rootfs"&lt;/span&gt;
$ &lt;span class="kw"&gt;exit&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="feedback"&gt;Feedback&lt;/h2&gt;
&lt;p&gt;If you encounter any bugs or have feature requests, please file them in the &lt;a href="http://github.com/tozd/docker-ubuntu-systemd/issues/"&gt;issue tracker&lt;/a&gt; or even develop it yourself and submit a pull request over &lt;a href="http://github.com/tozd/docker-ubuntu-systemd/"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="license"&gt;License&lt;/h2&gt;
&lt;p&gt;Copyright © 2015 &lt;em&gt;gw0&lt;/em&gt; [&lt;a href="http://gw.tnode.com/"&gt;http://gw.tnode.com/&lt;/a&gt;] &amp;lt;&lt;script type="text/javascript"&gt;
&lt;!--
h='&amp;#116;&amp;#110;&amp;#x6f;&amp;#100;&amp;#x65;&amp;#46;&amp;#x63;&amp;#x6f;&amp;#x6d;';a='&amp;#64;';n='&amp;#x67;&amp;#x77;&amp;#46;&amp;#50;&amp;#48;&amp;#x31;&amp;#x35;';e=n+a+h;
document.write('&lt;a h'+'ref'+'="ma'+'ilto'+':'+e+'"&gt;'+e+'&lt;\/'+'a'+'&gt;');
// --&gt;
&lt;/script&gt;&lt;noscript&gt;gw.2015 at tnode dot com&lt;/noscript&gt;&amp;gt;&lt;/p&gt;
&lt;p&gt;This library is licensed under the &lt;a href="LICENSE_AGPL-3.0.txt"&gt;GNU Affero General Public License 3.0+&lt;/a&gt; (AGPL-3.0+). Note that it is mandatory to make all modifications and complete source code of this library publicly available to any user.&lt;/p&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://github.com/docker/docker/pull/13525"&gt;http://github.com/docker/docker/pull/13525&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://github.com/maci0/docker-systemd-unpriv/blob/master/Dockerfile"&gt;http://github.com/maci0/docker-systemd-unpriv/blob/master/Dockerfile&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://github.com/lxc/lxc/blob/master/templates/lxc-debian.in"&gt;http://github.com/lxc/lxc/blob/master/templates/lxc-debian.in&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://github.com/docker/docker/blob/master/contrib/mkimage/debootstrap"&gt;http://github.com/docker/docker/blob/master/contrib/mkimage/debootstrap&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://github.com/tianon/docker-brew-ubuntu-core/blob/67accc07b2f77dbf00dc4e2d5b90c00abc225ec6/vivid/Dockerfile"&gt;http://github.com/tianon/docker-brew-ubuntu-core/blob/67accc07b2f77dbf00dc4e2d5b90c00abc225ec6/vivid/Dockerfile&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="docker"></category><category term="ubuntu"></category><category term="image"></category></entry><entry><title>Learning Representations for Text-level Discourse Parsing</title><link href="http://gw.tnode.com/deep-learning/acl2015-learning-representations-for-text-level-discourse-parsing/" rel="alternate"></link><updated>2015-08-21T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2015-04-02:deep-learning/acl2015-learning-representations-for-text-level-discourse-parsing/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="ACL-IJCNLP 2015 logo" height="120" src="http://gw.tnode.com/deep-learning/img/acl2015-logo.jpg" width="450"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h2 id="conference-proceeding"&gt;Conference proceeding&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;G. Weiss, “&lt;strong&gt;Learning Representations for Text-level Discourse Parsing&lt;/strong&gt;,” in Proceedings of the ACL-IJCNLP 2015 Student Research Workshop, 2015, pp. 16–21.&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; &lt;a href="http://acl2015.org/"&gt;conference&lt;/a&gt;, &lt;a href="http://www.aclweb.org/anthology/P/P15/"&gt;proceedings&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-book"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/deep-learning/f/acl2015weiss-proposal.pdf"&gt;paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-picture-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/deep-learning/f/acl2015weiss-presentation.pdf"&gt;presentation&lt;/a&gt;, &lt;a href="http://gw.tnode.com/deep-learning/acl2015-presentation/"&gt;online&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-picture-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/deep-learning/f/acl2015weiss-poster.pdf"&gt;poster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-bookmark-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/deep-learning/f/acl2015weiss.bib"&gt;bibtex&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;In the proposed doctoral work we will design an end-to-end approach for the challenging NLP task of text-level discourse parsing. Instead of depending on mostly hand-engineered sparse features and independent components for each subtask, we propose a unified approach completely based on deep learning architectures. To train better dense vector representations that capture communicative functions and semantic roles of discourse units and relations between them, we will jointly learn all discourse parsing subtasks at different layers of our stacked architecture and share their intermediate representations. By combining unsupervised training of word embeddings and related NLP tasks with our guided layer-wise multi-task learning of higher representations we hope to reach or even surpass performance of current state-of-the-art methods on annotated English corpora.&lt;/p&gt;
</summary><category term="deep learning"></category><category term="nlp"></category><category term="conference"></category><category term="paper"></category><category term="poster"></category><category term="presentation"></category></entry><entry><title>Issues and workarounds for Debian 8</title><link href="http://gw.tnode.com/debian/issues-and-workarounds-for-debian-8/" rel="alternate"></link><updated>2015-05-05T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2015-02-04:debian/issues-and-workarounds-for-debian-8/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Debian 8 logo" height="120" src="http://gw.tnode.com/debian/img/debian-8-logo.png" width="248"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="http://www.debian.org/"&gt;&lt;strong&gt;&lt;em&gt;Debian 8&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; brings updates to 66% of packages and switches to &lt;em&gt;systemd&lt;/em&gt; as its default system and service manager. This can result in a couple of issues, so check below for workarounds and solutions.&lt;/p&gt;
&lt;h2 id="systemd-ignores-decrypt_derived-keyscript"&gt;Systemd ignores &lt;code&gt;decrypt_derived&lt;/code&gt; keyscript&lt;/h2&gt;
&lt;p&gt;Many security-aware users of &lt;em&gt;Debian&lt;/em&gt; have encrypted root, swap, home, and other partitions using &lt;em&gt;cryptsetup&lt;/em&gt;. Automatic mounting of those partitions at boot time can be configured in &lt;code&gt;/etc/crypttab&lt;/code&gt;. Instead of entering a password for each partition every time, it is possible to use keyscripts that obtain keys from other sources and automate the process. One of those keyscripts is &lt;code&gt;decrypt_derived&lt;/code&gt;, which provides a way to chain encrypted partitions, such that the encryption key is automatically derived from a previously unlocked partition.&lt;/p&gt;
&lt;p&gt;This used to work pretty well on &lt;em&gt;Debian 7&lt;/em&gt; and older, which used the &lt;em&gt;SysV&lt;/em&gt; service manager:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cat&lt;/span&gt; /etc/crypttab
&lt;span class="kw"&gt;debian_crypt&lt;/span&gt; UUID=08bd04d5-... none luks
&lt;span class="kw"&gt;sdb1_crypt&lt;/span&gt; UUID=a84f890c-... debian_crypt luks,keyscript=decrypt_derived
&lt;span class="kw"&gt;swap_crypt&lt;/span&gt; /dev/sda2 debian_crypt swap,cipher=aes-cbc-essiv:sha256,hash=ripemd160,size=256,keyscript=decrypt_derived&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Although the new &lt;em&gt;systemd&lt;/em&gt; service manager processes the &lt;code&gt;/etc/crypttab&lt;/code&gt; configuration, it unfortunately ignores keyscripts and offers no equivalent mechanism.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cat&lt;/span&gt; /var/log/syslog
&lt;span class="kw"&gt;Apr&lt;/span&gt; 04 20:37:38 xxx systemd-cryptsetup[544]: Encountered unknown
&lt;span class="kw"&gt;/etc/crypttab&lt;/span&gt; option &lt;span class="st"&gt;'keyscript=/lib/cryptsetup/scripts/decrypt_derived'&lt;/span&gt;, ignoring.&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="workaround"&gt;Workaround&lt;/h3&gt;
&lt;p&gt;Instead of using the &lt;code&gt;decrypt_derived&lt;/code&gt; keyscript, you can &lt;strong&gt;save its output into a file&lt;/strong&gt; and put it on the encrypted partition from which it is derived. For the above example, where &lt;code&gt;debian_crypt&lt;/code&gt; is the root partition and the other keys are derived from it, this means saving it into &lt;code&gt;/.debian_crypt.key&lt;/code&gt; (make sure no one but root can access it). Then specify this as the key file, add a dependency to mount this partition first, and do not forget to update the boot-time initramfs archive. This at least works for an encrypted root partition on which the others depend, and it is as secure as using the &lt;code&gt;decrypt_derived&lt;/code&gt; keyscript.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;/lib/cryptsetup/scripts/decrypt_derived&lt;/span&gt; debian_crypt &lt;span class="kw"&gt;&amp;gt;&lt;/span&gt; /.debian_crypt.key
$ &lt;span class="kw"&gt;chown&lt;/span&gt; root:root /.debian_crypt.key
$ &lt;span class="kw"&gt;chmod&lt;/span&gt; 600 /.debian_crypt.key
$ &lt;span class="kw"&gt;cat&lt;/span&gt; /etc/crypttab
&lt;span class="kw"&gt;debian_crypt&lt;/span&gt; UUID=08bd04d5-... none luks
&lt;span class="kw"&gt;sdb1_crypt&lt;/span&gt; UUID=a84f890c-... /.debian_crypt.key luks
&lt;span class="kw"&gt;swap_crypt&lt;/span&gt; /dev/sda2 /.debian_crypt.key swap,cipher=aes-cbc-essiv:sha256,hash=ripemd160,size=256
$ &lt;span class="kw"&gt;grep&lt;/span&gt; CRYPTDISKS_MOUNT /etc/default/cryptdisks
&lt;span class="ot"&gt;CRYPTDISKS_MOUNT=&lt;/span&gt;&lt;span class="st"&gt;"/dev/mapper/debian_crypt"&lt;/span&gt;
$ &lt;span class="kw"&gt;update-initramfs&lt;/span&gt; -u
$ &lt;span class="kw"&gt;reboot&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=618862"&gt;http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=618862&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="systemd-hibernation-with-encrypted-swap-partition"&gt;Systemd hibernation with encrypted swap partition&lt;/h2&gt;
&lt;p&gt;Although suspend to RAM works well on all versions of &lt;em&gt;Debian&lt;/em&gt;, one may also want to hibernate to disk now and then. Unfortunately this does not work if you are using an encrypted swap partition. By digging into the initramfs archive scripts and hibernation hooks this could seemingly be accomplished on &lt;em&gt;Debian 7&lt;/em&gt; or older, but on &lt;em&gt;Debian 8&lt;/em&gt; with the &lt;em&gt;systemd&lt;/em&gt; manager it seems to have become even more complicated.&lt;/p&gt;
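&lt;p&gt;For reference, on &lt;em&gt;Debian 7&lt;/em&gt; or older the initramfs-based approach amounted to pointing the resume configuration at the unlocked swap mapping and rebuilding the initramfs (a minimal sketch, reusing the &lt;code&gt;swap_crypt&lt;/code&gt; mapping from the example above; under &lt;em&gt;systemd&lt;/em&gt; this alone is not sufficient):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cat&lt;/span&gt; /etc/initramfs-tools/conf.d/resume
&lt;span class="ot"&gt;RESUME=&lt;/span&gt;/dev/mapper/swap_crypt
$ &lt;span class="kw"&gt;update-initramfs&lt;/span&gt; -u&lt;/code&gt;&lt;/pre&gt;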
&lt;h3 id="related-1"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=765594"&gt;http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=765594&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://help.ubuntu.com/community/BinaryDriverHowto/Nvidia#Suspend.2BAC8-Hibernation"&gt;http://help.ubuntu.com/community/BinaryDriverHowto/Nvidia#Suspend.2BAC8-Hibernation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="nvidia-graphic-driver-kernel-updates"&gt;Nvidia graphic driver kernel updates&lt;/h2&gt;
&lt;p&gt;Proprietary kernel drivers and apps often cause problems and inconveniences. One of them is recompiling the binary driver each time you switch to a new kernel. If you fail to do that, you will not be able to start the &lt;em&gt;X.org server&lt;/em&gt; graphical window interface and will be stuck in the console.&lt;/p&gt;
&lt;h3 id="workaround-1"&gt;Workaround&lt;/h3&gt;
&lt;p&gt;Luckily &lt;em&gt;Debian 8&lt;/em&gt; and older contain an &lt;em&gt;Nvidia DKMS&lt;/em&gt; driver that, when &lt;strong&gt;installed&lt;/strong&gt;, automatically recompiles itself after each kernel update.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;aptitude&lt;/span&gt; install nvidia-kernel-dkms xserver-xorg-video-nvidia&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="skype-error-loading-libgl.so.1"&gt;Skype error loading &lt;code&gt;libGL.so.1&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;In case you are using &lt;em&gt;Skype&lt;/em&gt; on a 64-bit system, it may not even start after updating to &lt;em&gt;Debian 8&lt;/em&gt;. &lt;em&gt;Skype&lt;/em&gt; looks for an i386 library that is not provided by the &lt;em&gt;Nvidia&lt;/em&gt; package.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ skype
libGL.so.1: cannot open shared object file: No such file or directory&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="workaround-2"&gt;Workaround&lt;/h3&gt;
&lt;p&gt;As a workaround the &lt;em&gt;MESA&lt;/em&gt; i386 library &lt;code&gt;libGL.so.1&lt;/code&gt; can be used. To use this library for &lt;em&gt;Skype&lt;/em&gt;, &lt;strong&gt;create the file&lt;/strong&gt; &lt;code&gt;/etc/ld.so.conf.d/skype.conf&lt;/code&gt; with the following contents and reload the shared libraries.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cat&lt;/span&gt; /etc/ld.so.conf.d/skype.conf
&lt;span class="co"&gt;# Skype error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory&lt;/span&gt;
&lt;span class="kw"&gt;/usr/lib/mesa-diverted/i386-linux-gnu&lt;/span&gt;
$ &lt;span class="kw"&gt;ldconfig&lt;/span&gt; -v&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related-2"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://askubuntu.com/questions/257897/error-loading-libgl-so-1"&gt;http://askubuntu.com/questions/257897/error-loading-libgl-so-1&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="black-screen-with-cursor-after-suspend"&gt;Black screen with cursor after suspend&lt;/h2&gt;
&lt;p&gt;When using a proprietary &lt;em&gt;Nvidia&lt;/em&gt; driver, it may happen that it does not reinitialize correctly after returning from suspend to RAM, so you end up at a black or garbled screen with a cursor. The problem seems to be connected with using &lt;em&gt;OpenGL&lt;/em&gt; effects in a compositing desktop environment.&lt;/p&gt;
&lt;h3 id="workaround-3"&gt;Workaround&lt;/h3&gt;
&lt;p&gt;In &lt;em&gt;KDE&lt;/em&gt; under &lt;em&gt;System Settings&lt;/em&gt;/&lt;em&gt;Workspace Appearance and Behavior&lt;/em&gt;/&lt;em&gt;Desktop Effects&lt;/em&gt; those desktop effects can be turned off. Alternatively you can set up a keyboard shortcut (default &lt;kbd&gt;Alt+Shift+F12&lt;/kbd&gt;) to &lt;strong&gt;toggle effects&lt;/strong&gt;; each time you get stuck at a black screen, just switch them off and on again, so that everything gets redrawn correctly.&lt;/p&gt;
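&lt;p&gt;The same toggle can also be triggered from a script (a hedged sketch; the D-Bus service and method names apply to &lt;em&gt;KWin&lt;/em&gt; in &lt;em&gt;KDE 4&lt;/em&gt; and may differ in other versions):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;qdbus&lt;/span&gt; org.kde.kwin /KWin toggleCompositing  &lt;span class="co"&gt;# effects off&lt;/span&gt;
$ &lt;span class="kw"&gt;qdbus&lt;/span&gt; org.kde.kwin /KWin toggleCompositing  &lt;span class="co"&gt;# effects back on&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;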
&lt;h2 id="kde-screensaver-during-a-fullscreen-application"&gt;KDE screensaver during a fullscreen application&lt;/h2&gt;
&lt;p&gt;In &lt;em&gt;KDE 4.11&lt;/em&gt; it sometimes happens that the screensaver turns on although you are watching a movie in fullscreen, e.g. in &lt;em&gt;VLC media player&lt;/em&gt; or in a browser with Flash or HTML5 video. In previous versions a workaround script called &lt;code&gt;lightsOn.sh&lt;/code&gt; used to prevent the screensaver from turning on by triggering events while such an application was running.&lt;/p&gt;
&lt;h3 id="workaround-4"&gt;Workaround&lt;/h3&gt;
&lt;p&gt;So the only remaining way is to &lt;strong&gt;actually turn it off&lt;/strong&gt;. In &lt;em&gt;KDE&lt;/em&gt; under &lt;em&gt;System Settings&lt;/em&gt;/&lt;em&gt;Hardware&lt;/em&gt;/&lt;em&gt;Display and Monitor&lt;/em&gt;/&lt;em&gt;Screen Locker&lt;/em&gt; turn off &lt;em&gt;Start automatically after # minutes&lt;/em&gt; and turn it back on when you stop watching the fullscreen application.&lt;/p&gt;
&lt;p&gt;You may also try programmatically disabling and enabling the screensaver with:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;xset&lt;/span&gt; -dpms&lt;span class="kw"&gt;;&lt;/span&gt; &lt;span class="kw"&gt;xset&lt;/span&gt; s off
$ &lt;span class="kw"&gt;xset&lt;/span&gt; +dpms&lt;span class="kw"&gt;;&lt;/span&gt; &lt;span class="kw"&gt;xset&lt;/span&gt; s on&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related-3"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://askubuntu.com/questions/193930/how-to-disable-sleep-screensaver-kubuntu-12-04-lts-and-vlc"&gt;http://askubuntu.com/questions/193930/how-to-disable-sleep-screensaver-kubuntu-12-04-lts-and-vlc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="unbootable-with-software-raid1-and-grub2"&gt;Unbootable with software RAID1 and GRUB2&lt;/h2&gt;
&lt;p&gt;There is a bug in &lt;em&gt;Debian Installer&lt;/em&gt; that results in an unbootable system after a fresh installation of &lt;em&gt;Debian 8.1&lt;/em&gt;. It happens if you manually partition your disks and set up &lt;strong&gt;all partitions in a software RAID array&lt;/strong&gt;, including the partition containing &lt;code&gt;/boot&lt;/code&gt;. &lt;em&gt;mdadm&lt;/em&gt; will by default create arrays with metadata format version 1.2 (not 0.90). This confuses the &lt;em&gt;GRUB2&lt;/em&gt; installation step so that it does not install itself into the master boot record of the hard drives correctly. As a result the system hangs at a blank screen where the &lt;em&gt;GRUB2&lt;/em&gt; boot loader should appear.&lt;/p&gt;
&lt;h3 id="workaround-5"&gt;Workaround&lt;/h3&gt;
&lt;p&gt;During the installation process, just after the &lt;em&gt;Install the GRUB boot loader on a hard disk&lt;/em&gt; step, you should switch to the console (&lt;kbd&gt;Ctrl+Alt+F2&lt;/kbd&gt;), &lt;strong&gt;force install&lt;/strong&gt; the &lt;em&gt;GRUB2&lt;/em&gt; boot loader on all hard drives (not partitions), and update the &lt;em&gt;initramfs&lt;/em&gt; (in case it was missing something):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;mount&lt;/span&gt; -t proc proc /target/proc
$ &lt;span class="kw"&gt;chroot&lt;/span&gt; /target
$ &lt;span class="kw"&gt;update-grub&lt;/span&gt;
$ &lt;span class="kw"&gt;grub-install&lt;/span&gt; --recheck /dev/sda
$ &lt;span class="kw"&gt;grub-install&lt;/span&gt; --recheck /dev/sdb
$ &lt;span class="kw"&gt;update-initramfs&lt;/span&gt; -u&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Afterwards switch back to the &lt;em&gt;Debian Installer&lt;/em&gt; (&lt;kbd&gt;Ctrl+Alt+F1&lt;/kbd&gt;) and finalize the installation.&lt;/p&gt;
&lt;h3 id="related-4"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-debian-squeeze-p2"&gt;http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-debian-squeeze-p2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="dkms-could-not-locate-dkms.conf"&gt;DKMS could not locate &lt;code&gt;dkms.conf&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Each time you upgrade the kernel, you should check whether everything went smoothly, especially if you are using proprietary drivers or custom kernel modules. It can happen that the &lt;em&gt;DKMS&lt;/em&gt; framework for automatically recompiling kernel modules gets stuck and consequently some modules will not work under the new kernel.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;dkms&lt;/span&gt; status
&lt;span class="kw"&gt;Error&lt;/span&gt;! Could not locate dkms.conf file.
&lt;span class="kw"&gt;File&lt;/span&gt;:  does not exist.&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The cause of this error is an invalid state in the &lt;em&gt;DKMS&lt;/em&gt; build directories under &lt;code&gt;/var/lib/dkms&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id="workaround-6"&gt;Workaround&lt;/h3&gt;
&lt;p&gt;A simple solution is to &lt;strong&gt;manually remove all&lt;/strong&gt; &lt;em&gt;DKMS&lt;/em&gt; build directories and &lt;strong&gt;reinstall all&lt;/strong&gt; &lt;code&gt;*-dkms&lt;/code&gt; packages.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;rm&lt;/span&gt; -rf /var/lib/dkms/nvidia /var/lib/dkms/nvidia-current /var/lib/dkms/virtualbox&lt;/code&gt;&lt;/pre&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;aptitude&lt;/span&gt; search &lt;span class="st"&gt;'~i-dkms'&lt;/span&gt;
&lt;span class="kw"&gt;i&lt;/span&gt;   nvidia-kernel-dkms
&lt;span class="kw"&gt;i&lt;/span&gt;   virtualbox-dkms
$ &lt;span class="kw"&gt;aptitude&lt;/span&gt; reinstall nvidia-kernel-dkms
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;Setting&lt;/span&gt; up nvidia-kernel-dkms (340.65-2) &lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;Loading&lt;/span&gt; new nvidia-current-340.65 DKMS files...
&lt;span class="kw"&gt;Building&lt;/span&gt; only for 3.16.0-4-amd64
&lt;span class="kw"&gt;Building&lt;/span&gt; initial module for 3.16.0-4-amd64
&lt;span class="kw"&gt;Done.&lt;/span&gt;
&lt;span class="kw"&gt;nvidia-current&lt;/span&gt;:
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;nvidia-uvm.ko&lt;/span&gt;:
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;DKMS&lt;/span&gt;: install completed.
$ &lt;span class="kw"&gt;aptitude&lt;/span&gt; reinstall virtualbox-dkms
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;Setting&lt;/span&gt; up virtualbox-dkms (4.3.18-dfsg-3+deb8u3) &lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;Loading&lt;/span&gt; new virtualbox-4.3.18 DKMS files...
&lt;span class="kw"&gt;Building&lt;/span&gt; only for 3.16.0-4-amd64
&lt;span class="kw"&gt;Building&lt;/span&gt; initial module for 3.16.0-4-amd64
&lt;span class="kw"&gt;Done.&lt;/span&gt;
&lt;span class="kw"&gt;vboxdrv&lt;/span&gt;:
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;vboxnetadp.ko&lt;/span&gt;:
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;vboxnetflt.ko&lt;/span&gt;:
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;vboxpci.ko&lt;/span&gt;:
&lt;span class="kw"&gt;...&lt;/span&gt;
&lt;span class="kw"&gt;DKMS&lt;/span&gt;: install completed.&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Afterwards make sure your kernel modules were recompiled for the new kernel:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;dkms&lt;/span&gt; status
&lt;span class="kw"&gt;nvidia-current&lt;/span&gt;, 340.65, 3.16.0-4-amd64, x86_64: installed
&lt;span class="kw"&gt;virtualbox&lt;/span&gt;, 4.3.18, 3.16.0-4-amd64, x86_64: installed&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related-5"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=695824"&gt;https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=695824&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="debian"></category><category term="issue"></category></entry><entry><title>[Sem3] Generic feature extraction for text categorization</title><link href="http://gw.tnode.com/student/sem3-presentation/" rel="alternate"></link><updated>2015-01-08T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2015-01-04:student/sem3-presentation/</id><summary type="html">&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
&lt;br /&gt;&lt;br /&gt;&lt;center&gt;
#### PhD-Sem3

## Generic feature extraction
## for text categorization

&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;

&lt;small&gt;
Copyright &amp;copy; 2015 *&lt;a href="http://gw.tnode.com/" rel="author"&gt;gw0&lt;/a&gt;* [&lt;http://gw.tnode.com/&gt;] &amp;lt;&lt;gw.2015@tnode.com&gt;&amp;gt;
&lt;/small&gt;
&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Contents

- text categorization
    - traditional framework
    - state of feature extraction
- our approach
    - genetic programming
    - fitness measures
    - primitive building blocks and features
- future work
&lt;/script&gt;&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Text categorization

- increasing amounts of unstructured textual data
- **text categorization**&lt;sup&gt;[1][4]&lt;/sup&gt; has many applications:
    - automatic indexing, taxonomy creation
    - news filtering, document routing
    - authorship attribution
    - spam detection, deception detection
    - sentiment analysis, objectivity estimation
- phases: **preprocessing**, feature selection, machine learning algorithm for text analysis, performance evaluation
- need for more scalable, robust, and domain-independent methods for natural language processing (NLP)&lt;sup&gt;[1]&lt;/sup&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Traditional framework

&lt;center&gt;&lt;img src="http://gw.tnode.com/student/img/text-categorization_02.png" width="365" height="500" alt="Text categorization workflow" style="padding:10px;" /&gt;&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### State of feature extraction

- **based on language/domain/task specific knowledge**
- very high-dimensional feature-vector representation

- well-known methods&lt;sup&gt;[2][4][19-29]&lt;/sup&gt;:
    - tokenization, stop-words removal, HTML tags removal
    - stemming, lemmatization
    - bag-of-words, char or word n-grams, tf-idf weighting
    - latent semantic indexing, multi-word
    - part-of-speech tagging, structural tagging
    - lexicons, thesaurus, domain knowledge
    - hand-crafted regular expressions

- recently representation learning for other NLP tasks is being researched by the neural networks community&lt;sup&gt;[8]&lt;/sup&gt;
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Our approach

*Input:*

- unstructured text documents

*Output:*

- more informative feature-vector representation

*Approach:*

- **generic feature extraction method**
- almost without any special NLP or hand-crafted knowledge
- **using genetic programming**
- iteratively try to define and combine primitive features until more complex and informative ones are induced
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Genetic programming

- **general technique for inducing computer programs** by evolving them in a computer&lt;sup&gt;[10][11]&lt;/sup&gt;
- solving problems in a wide range of disciplines
- reformulate a problem using:
    - genetic operators
    - heuristic fitness measure
    - primitive operators, structures, inputs, outputs
- enormous size of the problem space:
    - produce valid programs&lt;sup&gt;[11]&lt;/sup&gt;
    - limit structure&lt;sup&gt;[12]&lt;/sup&gt;
    - accelerate using multiple CPU or GPU&lt;sup&gt;[13]&lt;/sup&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Fitness measures

- must drive evolution towards informative features
- **feature selection or scoring methods**
- with text categorization in mind

*Measures:*

- bi-normal separation&lt;sup&gt;[15]&lt;/sup&gt;
- based on Gini index theory&lt;sup&gt;[16]&lt;/sup&gt;
- based on Naive Bayes learning algorithm&lt;sup&gt;[17]&lt;/sup&gt;
- based on ReliefF&lt;sup&gt;[18]&lt;/sup&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### Primitive building blocks and features

- boolean, integer, and real values, dictionaries
- logic/arithmetic operators, conditional clauses, iteration

- count the occurrences of terms:
    - bag-of-words, char or word n-grams
    - phrases, synonyms, hypernyms
    - match patterns, grammars

- special functions:
    - tf-idf weighting scheme, normalizations
    - measuring similarity and length of segments
    - singular value decomposition
    - function to compute the entropy
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
## Future work

- implement our approach
- experiment with different primitive features
- analyze induced feature extractors
- compare performance on text categorization task with traditional preprocessing and with our method
- publish paper
&lt;/script&gt;&lt;/section&gt;

&lt;section&gt;
&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
### References

&lt;small&gt;[1] C. C. Aggarwal and C. X. Zhai, **Mining Text Data**, vol. 4. Boston, MA: Springer US, 2012.&lt;/small&gt;
&lt;small&gt;[2] V. Gupta and G. S. Lehal, “**A survey of text mining techniques and applications**,” J. Emerg. Technol. Web Intell., vol. 1, no. 1, pp. 60–76, 2009.&lt;/small&gt;
&lt;small&gt;[3] M. K. Dalal and M. A. Zaveri, “**Automatic Text Classification: A Technical Review**,” Int. J. Comput. Appl., vol. 28, pp. 37–40, Aug. 2011.&lt;/small&gt;
&lt;small&gt;[4] F. Sebastiani, “**Machine Learning in Automated Text Categorization**,” ACM Comput. Surv., vol. 34, pp. 1–47, Oct. 2002.&lt;/small&gt;
&lt;small&gt;[5] F. Sebastiani, “**Text Categorization**,” in Text Min. its Appl. (A. Zanasi, ed.), pp. 109–129, WIT Press, 2005.&lt;/small&gt;
&lt;small&gt;[6] T. Joachims, “**Text categorization with support vector machines: Learning with many relevant features**,” in Proc. Eur. Conf. Mach. Learn. (C. Nédellec and C. Rouveirol, eds.), vol. 1398, pp. 137–142, Springer Berlin Heidelberg, 1998.&lt;/small&gt;
&lt;small&gt;[7] Y. Bengio, A. Courville, and P. Vincent, “**Representation learning: a review and new perspectives.**,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, pp. 1798–828, Aug. 2013.&lt;/small&gt;
&lt;small&gt;[8] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, “**Natural Language Processing (almost) from Scratch**,” J. Mach. Learn. Res., vol. 12, pp. 2493–2537, 2011.&lt;/small&gt;
&lt;small&gt;[9] T. Mikolov, G. Corrado, K. Chen, and J. Dean, “**Efficient Estimation of Word Representations in Vector Space**,” in Proc. Int. Conf. Learn. Represent. (ICLR 2013), pp. 1–12, 2013.&lt;/small&gt;
&lt;small&gt;[10] J. R. Koza, **Genetic Programming: On the Programming of Computers by Means of Natural Selection**. Cambridge, MA, USA: MIT Press, 1992.&lt;/small&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
&lt;small&gt;[11] M. L. Wong and K. S. Leung, “**Evolutionary Program Induction Directed by Logic Grammars**,” Evol. Comput., vol. 5, pp. 143–180, June 1997.&lt;/small&gt;
&lt;small&gt;[12] J. F. Miller and P. Thomson, “**Cartesian Genetic Programming**,” Nat. Comput. Ser., vol. 43, pp. 17–34, 2011.&lt;/small&gt;
&lt;small&gt;[13] S. Harding and W. Banzhaf, “**Fast Genetic Programming on GPUs**,” in Proc. 10th Eur. Conf. Genet. Program. (M. Ebner, M. O’Neill, A. Ekárt, L. Vanneschi, and A. I. Esparcia-Alcázar, eds.), vol. 4445 of Lecture Notes in Computer Science, (Berlin, Heidelberg), pp. 90–101, Springer Berlin Heidelberg, 2007.&lt;/small&gt;
&lt;small&gt;[14] M. Wineberg and F. Oppacher, “**A Representation Scheme to Perform Program Induction in a Canonical Genetic Algorithm**,” in Parallel Probl. Solving from Nature—PPSN III, pp. 291–301, Springer Berlin Heidelberg, 1994.&lt;/small&gt;
&lt;small&gt;[15] G. Forman, “**An Extensive Empirical Study of Feature Selection Metrics for Text Classification**,” J. Mach. Learn. Res., vol. 3, pp. 1289–1305, 2003.&lt;/small&gt;
&lt;small&gt;[16] W. Shang, H. Huang, H. Zhu, Y. Lin, Y. Qu, and Z. Wang, “**A novel feature selection algorithm for text categorization**,” Expert Syst. Appl., vol. 33, pp. 1–5, July 2007.&lt;/small&gt;
&lt;small&gt;[17] J. Chen, H. Huang, S. Tian, and Y. Qu, “**Feature selection for text classification with Naive Bayes**,” Expert Syst. Appl., vol. 36, no. 3, pp. 5432–5435, 2009.&lt;/small&gt;
&lt;small&gt;[18] M. Robnik-Šikonja and I. Kononenko, “**Theoretical and empirical analysis of ReliefF and RReliefF**,” Mach. Learn. J., vol. 53, no. 1-2, pp. 23–69, 2003.&lt;/small&gt;
&lt;small&gt;[19] W. B. Cavnar and J. M. Trenkle, “**N-Gram-Based Text Categorization**,” in Proc. SDAIR-94, 3rd Annu. Symp. Doc. Anal. Inf. Retr., pp. 161–175, 1994.&lt;/small&gt;
&lt;small&gt;[20] D. D. Lewis, “**Feature selection and feature extraction for text categorization**,” in Proc. Work. Speech Nat. Lang. - HLT ’91, (Morristown, NJ, USA), p. 212, Association for Computational Linguistics, 1992.&lt;/small&gt;
&lt;/script&gt;&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
&lt;small&gt;[21] S. Scott and S. Matwin, “**Feature engineering for text classification**,” in Proc. ICML-99, 16th Int. Conf. Mach. Learn., vol. 99, pp. 379–388, 1999.&lt;/small&gt;
&lt;small&gt;[22] S.-B. Kim, K.-S. Han, H.-C. Rim, and S. H. Myaeng, “**Some Effective Techniques for Naive Bayes Text Classification**,” IEEE Trans. Knowl. Data Eng., vol. 18, pp. 1457–1466, Nov. 2006.&lt;/small&gt;
&lt;small&gt;[23] W. Zhang, T. Yoshida, and X. Tang, “**Text classification based on multi-word with support vector machine**,” Knowledge-Based Syst., vol. 21, pp. 879–886, Dec. 2008.&lt;/small&gt;
&lt;small&gt;[24] M. Porter, “**An algorithm for suffix stripping**,” Progr. Electron. Libr. Inf. Syst., vol. 14, no. 3, pp. 130–137, 1980.&lt;/small&gt;
&lt;small&gt;[25] W. B. Frakes, “**Stemming Algorithms**,” in Inf. Retr. Boston. (W. B. Frakes and R. Baeza-Yates, eds.), pp. 131–160, Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 1992.&lt;/small&gt;
&lt;small&gt;[26] J. Plisson, N. Lavrac, and D. Mladenic, “**A Rule based Approach to Word Lemmatization**,” in Proc. 7th Int. multi-conference Inf. Soc., (Ljubljana), pp. 83–86, Jožef Stefan Institute, 2004.&lt;/small&gt;
&lt;small&gt;[27] A. Aizawa, “**An information-theoretic perspective of tf-idf measures**,” Inf. Process. Manag., vol. 39, pp. 45–65, 2003.&lt;/small&gt;
&lt;small&gt;[28] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman, “**Indexing by latent semantic analysis**,” J. Am. Soc. Inf. Sci., vol. 41, pp. 391–407, Sept. 1990.&lt;/small&gt;
&lt;small&gt;[29] W. Zhang, T. Yoshida, and X. Tang, “**A comparative study of TF-IDF, LSI and multi-words for text classification**,” Expert Syst. Appl., vol. 38, pp. 2758–2765, 2011.&lt;/small&gt;
&lt;small&gt;[30] B. Liu, **Sentiment Analysis and Opinion Mining**, vol. 5. Morgan &amp; Claypool Publishers, May 2012.&lt;/small&gt;
&lt;small&gt;[31] G. Vinodhini and R. M. Chandrasekaran, “**Sentiment Analysis and Opinion Mining: A Survey**,” Int. J. Adv. Res. Comput. Sci. Softw. Eng., vol. 2, no. 6, pp. 282–292, 2012.&lt;/small&gt;
&lt;small&gt;[32] S. Sarawagi, “**Information Extraction**,” Found. Trends Databases, vol. 1, no. 3, pp. 261–377, 2008.&lt;/small&gt;
&lt;/script&gt;&lt;/section&gt;
&lt;/section&gt;

&lt;section data-markdown&gt;&lt;script type="text/template"&gt;
&lt;br /&gt;&lt;br /&gt;&lt;center&gt;

# Thank you

&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;

&lt;small&gt;&lt;http://gw.tnode.com/student/sem3-generic-feature-extraction-for-text-categorization/&gt;&lt;/small&gt;
&lt;small&gt;
Copyright &amp;copy; 2015 *&lt;a href="http://gw.tnode.com/" rel="author"&gt;gw0&lt;/a&gt;* [&lt;http://gw.tnode.com/&gt;] &amp;lt;&lt;gw.2015@tnode.com&gt;&amp;gt;
&lt;/small&gt;
&lt;/center&gt;
&lt;/script&gt;&lt;/section&gt;
</summary><category term="student"></category><category term="nlp"></category><category term="presentation"></category></entry><entry><title>[Sem3] Generic feature extraction for text categorization</title><link href="http://gw.tnode.com/student/sem3-generic-feature-extraction-for-text-categorization/" rel="alternate"></link><updated>2015-01-04T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-12-01:student/sem3-generic-feature-extraction-for-text-categorization/</id><summary type="html">
&lt;h2 id="student-paper"&gt;Student paper&lt;/h2&gt;
&lt;p&gt;Related work review of “Generic feature extraction for text categorization”.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-book"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/student/f/sem3-paper.pdf"&gt;paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-picture-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/student/f/sem3-presentation.pdf"&gt;presentation&lt;/a&gt;, &lt;a href="http://gw.tnode.com/student/sem3-presentation/"&gt;online&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;Extracting more informative text features improves the performance of text mining tasks, such as text categorization and sentiment analysis. Nevertheless, researchers usually focus on the main part of their problem and just preprocess texts with well-known linguistic and custom methods that are based on background knowledge about a given language, domain, and specific task. To improve and automate the initial preprocessing phase we propose a generic feature extraction method inspired by inductive programming. Beginning almost without any natural language processing knowledge, it will heuristically try to define and combine elemental feature extractors until more complex and informative features are found.&lt;/p&gt;
</summary><category term="student"></category><category term="nlp"></category><category term="paper"></category><category term="presentation"></category></entry><entry><title>Screenshots for glyphs</title><link href="http://gw.tnode.com/screenshots-for-glyphs/" rel="alternate"></link><updated>2015-08-24T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-09-27:screenshots-for-glyphs/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Screenshots for glyphs" height="200" src="http://gw.tnode.com/screenshots-for-glyphs/img/screenshots-featured.png" width="409"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Screenshots for glyphs&lt;/em&gt;&lt;/strong&gt; (formerly called &lt;em&gt;Screenshots for Ingress&lt;/em&gt;) is an integrated &lt;strong&gt;glyph hacking recorder&lt;/strong&gt; app for games such as &lt;a href="http://play.google.com/store/apps/details?id=com.nianticproject.ingress"&gt;&lt;em&gt;Ingress&lt;/em&gt;&lt;/a&gt;, the popular augmented reality massively multiplayer online role-playing mobile game for &lt;em&gt;Android&lt;/em&gt;.&lt;/p&gt;
&lt;!-- - [google play](http://play.google.com/store/apps/details?id=gw0.screenshotsforingress) --&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://gw.tnode.com/screenshots-for-glyphs/f/screenshots-0.9.apk"&gt;download apk v0.9&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://gw.tnode.com/donations/"&gt;donate&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="news"&gt;News&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;new version 0.9 (2015-08-24)&lt;/li&gt;
&lt;li&gt;fix integration with Ingress on Android 5.1.1 and M&lt;/li&gt;
&lt;li&gt;old: temporarily removed from &lt;em&gt;Google Play&lt;/em&gt;, as &lt;em&gt;Niantic Labs&lt;/em&gt; filed a complaint without explaining what exactly was violated&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Screenshots for glyphs&lt;/em&gt; works as a simple floating application that displays the last few screenshots taken. It can therefore be useful for &lt;strong&gt;glyph hacking&lt;/strong&gt; in games, such as &lt;em&gt;Ingress&lt;/em&gt; (in &lt;em&gt;Portal view&lt;/em&gt; long-touch the &lt;em&gt;Hack&lt;/em&gt; button).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;On &lt;strong&gt;stock phones&lt;/strong&gt; press the usual screenshot trigger combination (simultaneously hold for a second or two either &lt;kbd&gt;power + volume down&lt;/kbd&gt; or &lt;kbd&gt;power + home&lt;/kbd&gt;). Unfortunately this method can be too slow; use screenshots only for one or two glyphs and your brain for the rest.&lt;/li&gt;
&lt;li&gt;On &lt;strong&gt;rooted phones&lt;/strong&gt; screenshots can be taken programmatically much faster just by &lt;kbd&gt;touching&lt;/kbd&gt; the floating icon. Make sure you have working binaries &lt;code&gt;/system/xbin/su&lt;/code&gt; and &lt;code&gt;/system/bin/screencap&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;kbd&gt;Long-touch&lt;/kbd&gt; the floating icon to remove the screenshots and shrink the window.&lt;/li&gt;
&lt;/ul&gt;
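&lt;p&gt;The rooted-phone fast path boils down to running &lt;code&gt;screencap&lt;/code&gt; as root. As a rough illustration (not the app's actual internals), the same capture can be reproduced manually from a desktop over &lt;code&gt;adb&lt;/code&gt;; the output file name is an illustrative assumption:&lt;/p&gt;

```shell
# Sketch: reproduce the rooted-phone screenshot path manually.
# The commands are only printed here so they can be reviewed first;
# paste them into a terminal with a rooted device attached.
shot=/sdcard/glyph.png
printf '%s\n' \
  "adb shell su -c '/system/bin/screencap -p $shot'" \
  "adb pull $shot"
```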
&lt;h3 id="notice"&gt;Notice&lt;/h3&gt;
&lt;p&gt;Make sure you have the game &lt;a href="http://play.google.com/store/apps/details?id=com.nianticproject.ingress"&gt;&lt;em&gt;Ingress&lt;/em&gt;&lt;/a&gt; installed. This recorder app automatically launches &lt;em&gt;Ingress&lt;/em&gt; and shows a floating icon somewhere in the top-right corner. If you close &lt;em&gt;Ingress&lt;/em&gt; or switch to another application, the icon disappears as if it were integrated into the game.&lt;/p&gt;
&lt;div class="alert alert-warning" role="alert"&gt;
&lt;strong&gt;Warning!&lt;/strong&gt; While the application is running, screenshots are automatically deleted shortly after being taken.
&lt;/div&gt;
&lt;div class="alert alert-success" role="alert"&gt;
&lt;strong&gt;Play Ingress, join Enlightened!&lt;/strong&gt; Please spread this app to all Ingress players and give 5 stars if you like it. Thank you!
&lt;/div&gt;
&lt;div class="alert alert-info" role="alert"&gt;
&lt;strong&gt;Resistance players:&lt;/strong&gt; The app contains the green logo and a notice. Please donate if you want to see this changed; negative reviews will only encourage us to restrict the app to a single faction.
&lt;/div&gt;
&lt;h2 id="other"&gt;Other&lt;/h2&gt;
&lt;h3 id="donations-crowdfunding"&gt;Donations &amp;amp; Crowdfunding&lt;/h3&gt;
&lt;p&gt;&lt;a href="http://gw.tnode.com/donations/"&gt;http://gw.tnode.com/donations/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;To support the development of free apps and their release under an open-source license, we encourage you to donate a small amount. If every user paid just $1, this would amount to around $1500 in donations. When that amount is reached, all source code will be publicly released. Accepted payment methods are PayPal, credit cards, and bitcoins.&lt;/p&gt;
&lt;iframe frameborder="0" height="366" scrolling="no" src="http://gw.tnode.com/donations/?ig_embed_widget=1&amp;amp;product_no=1" width="214"&gt;&lt;/iframe&gt;
&lt;h3 id="feedback"&gt;Feedback&lt;/h3&gt;
&lt;p&gt;If you encounter any bugs or have feature requests, please send an &lt;script type="text/javascript"&gt;
&lt;!--
h='&amp;#116;&amp;#110;&amp;#x6f;&amp;#100;&amp;#x65;&amp;#46;&amp;#x63;&amp;#x6f;&amp;#x6d;&amp;#x3f;&amp;#x73;&amp;#x75;&amp;#98;&amp;#106;&amp;#x65;&amp;#x63;&amp;#116;&amp;#x3d;&amp;#x53;&amp;#x63;&amp;#114;&amp;#x65;&amp;#x65;&amp;#110;&amp;#x73;&amp;#104;&amp;#x6f;&amp;#116;&amp;#x73;&amp;#x25;&amp;#50;&amp;#48;&amp;#98;&amp;#x75;&amp;#x67;';a='&amp;#64;';n='&amp;#x67;&amp;#x77;&amp;#46;&amp;#50;&amp;#48;&amp;#x31;&amp;#x35;';e=n+a+h;
document.write('&lt;a h'+'ref'+'="ma'+'ilto'+':'+e+'"&gt;'+'email'+'&lt;\/'+'a'+'&gt;');
// --&gt;
&lt;/script&gt;&lt;noscript&gt;email (gw.2015 at tnode dot com?subject=Screenshots%20bug)&lt;/noscript&gt; and we’ll see what we can do. Please give 5 stars or donate if you like it. Thank you!&lt;/p&gt;
&lt;h3 id="license"&gt;License&lt;/h3&gt;
&lt;p&gt;Copyright © 2014-15 &lt;em&gt;gw0&lt;/em&gt; [&lt;a href="http://gw.tnode.com/"&gt;http://gw.tnode.com/&lt;/a&gt;] &amp;lt;&lt;script type="text/javascript"&gt;
&lt;!--
h='&amp;#116;&amp;#110;&amp;#x6f;&amp;#100;&amp;#x65;&amp;#46;&amp;#x63;&amp;#x6f;&amp;#x6d;';a='&amp;#64;';n='&amp;#x67;&amp;#x77;&amp;#46;&amp;#50;&amp;#48;&amp;#x31;&amp;#x35;';e=n+a+h;
document.write('&lt;a h'+'ref'+'="ma'+'ilto'+':'+e+'"&gt;'+e+'&lt;\/'+'a'+'&gt;');
// --&gt;
&lt;/script&gt;&lt;noscript&gt;gw.2015 at tnode dot com&lt;/noscript&gt;&amp;gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Screenshots for glyphs&lt;/em&gt; is NOT affiliated in any way with Google Inc., Niantic Labs, or Ingress. It is NOT intended solely for use with Ingress, and no graphics or other material from Ingress have been used for promotional purposes.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Screenshots for glyphs&lt;/em&gt; does not hack or interfere in any way with the normal behavior of any glyph-hacking game and is not intended to do so. It only helps with the process of remembering glyphs; you still need to take the screenshots yourself and draw the glyphs.&lt;/p&gt;
&lt;h3 id="screenshots"&gt;Screenshots&lt;/h3&gt;
&lt;div class="row"&gt;
&lt;figure class="col-sm-3 text-center"&gt;
&lt;a href="http://gw.tnode.com/screenshots-for-glyphs/img/screenshots-1.png"&gt;&lt;img alt="Screenshots in action" height="300" src="http://gw.tnode.com/screenshots-for-glyphs/img/screenshots-1.png" width="226"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Screenshots in action&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;figure class="col-sm-3 text-center"&gt;
&lt;a href="http://gw.tnode.com/screenshots-for-glyphs/img/screenshots-2.png"&gt;&lt;img alt="Screenshots before action" height="300" src="http://gw.tnode.com/screenshots-for-glyphs/img/screenshots-2.png" width="226"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Screenshots before action&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;figure class="col-sm-3 text-center"&gt;
&lt;a href="http://gw.tnode.com/screenshots-for-glyphs/img/screenshots-3.png"&gt;&lt;img alt="Screenshots in game" height="300" src="http://gw.tnode.com/screenshots-for-glyphs/img/screenshots-3.png" width="226"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Screenshots in game&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;figure class="col-sm-3 text-center"&gt;
&lt;a href="http://gw.tnode.com/screenshots-for-glyphs/img/screenshots-4.png"&gt;&lt;img alt="Screenshots in action again" height="300" src="http://gw.tnode.com/screenshots-for-glyphs/img/screenshots-4.png" width="226"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Screenshots in action again&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/div&gt;
</summary><category term="android"></category><category term="game"></category><category term="usage"></category><category term="tool"></category></entry><entry><title>CyanogenMod 11 on Nexus 5</title><link href="http://gw.tnode.com/android/cyanogenmod-11-on-nexus-5/" rel="alternate"></link><updated>2014-08-10T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-08-08:android/cyanogenmod-11-on-nexus-5/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="CyanogenMod logo" height="75" src="http://gw.tnode.com/android/img/cyanogenmod-11-logo.png" width="285"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;LG Nexus 5&lt;/em&gt;&lt;/strong&gt; is a powerful mobile phone, and the stock version of &lt;a href="http://www.android.com/"&gt;&lt;em&gt;Android&lt;/em&gt;&lt;/a&gt; already offers nearly everything. Nevertheless, &lt;a href="http://www.cyanogenmod.org/"&gt;&lt;strong&gt;&lt;em&gt;CyanogenMod 11 M9&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; (or newer), based on &lt;em&gt;Android 4.4.4 (KitKat)&lt;/em&gt;, can improve the experience, security, and customization even further.&lt;/p&gt;
&lt;h2 id="preparation"&gt;Preparation&lt;/h2&gt;
&lt;h3 id="requirements"&gt;Requirements&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;LG Nexus 5&lt;/em&gt; (Google/LG D820/D821, hammerhead)&lt;/li&gt;
&lt;li&gt;micro USB cable&lt;/li&gt;
&lt;li&gt;backup your data (contacts, calendar, photos, videos…)&lt;/li&gt;
&lt;li&gt;turn off phone encryption (if you enabled it, as it causes problems)&lt;/li&gt;
&lt;li&gt;working &lt;code&gt;adb&lt;/code&gt; and &lt;code&gt;fastboot&lt;/code&gt; from &lt;em&gt;Android SDK Tools&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
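&lt;p&gt;The steps below assume working &lt;code&gt;adb&lt;/code&gt; and &lt;code&gt;fastboot&lt;/code&gt; binaries. As a minimal sanity check before touching the phone (a sketch; the output wording is our own):&lt;/p&gt;

```shell
# Confirm the flashing tools from the Android SDK Platform-tools
# are on PATH before rebooting the phone into the bootloader.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found at $(command -v "$1")"
  else
    echo "$1: MISSING - install Android SDK Platform-tools"
  fi
}
check_tool adb
check_tool fastboot
```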
&lt;h3 id="download"&gt;Download&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://developer.android.com/sdk/"&gt;&lt;em&gt;Android SDK Tools&lt;/em&gt; and Platform-tools&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://clockworkmod.com/rommanager"&gt;&lt;em&gt;ClockworkMod Recovery 6.0.4.5&lt;/em&gt;&lt;/a&gt; (or &lt;a href="http://twrp.me/"&gt;&lt;em&gt;TWRP Recovery 2.8.7.0&lt;/em&gt;&lt;/a&gt; or newer) for &lt;em&gt;LG Nexus 5&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://download.CyanogenMod.org/?device=hammerhead"&gt;&lt;em&gt;CyanogenMod 11 M9&lt;/em&gt;&lt;/a&gt; (or newer) snapshot for &lt;em&gt;LG Nexus 5&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://wiki.CyanogenMod.org/w/Google_Apps"&gt;&lt;em&gt;Google Apps for CM 11&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="installation"&gt;Installation&lt;/h2&gt;
&lt;p&gt;Installing &lt;em&gt;CyanogenMod 11&lt;/em&gt; on &lt;em&gt;LG Nexus 5&lt;/em&gt; is straightforward, as there is no need to exploit a security hole in the system. There is also no need to create backup images of your original stock &lt;em&gt;Android&lt;/em&gt;, as all factory versions are &lt;a href="http://developers.google.com/android/nexus/images/"&gt;officially available&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For a thorough description see the usual &lt;a href="http://wiki.CyanogenMod.org/w/Install_CM_for_hammerhead"&gt;&lt;em&gt;CyanogenMod 11&lt;/em&gt; installation instructions&lt;/a&gt;, but the following steps should be sufficient.&lt;/p&gt;
&lt;h3 id="unlocking"&gt;Unlocking&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Reboot into the bootloader&lt;/strong&gt; using USB (alternatively &lt;kbd&gt;volume up + volume down + power&lt;/kbd&gt;) and &lt;strong&gt;unlock&lt;/strong&gt; the bootloader:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;adb&lt;/span&gt; reboot bootloader
$ &lt;span class="kw"&gt;fastboot&lt;/span&gt; oem unlock&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In case of permission problems, run commands as root and enable USB debugging on the device. On success an unlocked icon will appear at the bottom of the Google boot screen during reboots.&lt;/p&gt;
&lt;h3 id="recovery-console"&gt;Recovery console&lt;/h3&gt;
&lt;p&gt;Again &lt;strong&gt;reboot into the bootloader&lt;/strong&gt; (as before) and &lt;strong&gt;flash&lt;/strong&gt; previously downloaded advanced recovery console &lt;em&gt;ClockworkMod Recovery&lt;/em&gt; (or &lt;em&gt;TWRP Recovery&lt;/em&gt;):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;adb&lt;/span&gt; reboot bootloader
$ &lt;span class="kw"&gt;fastboot&lt;/span&gt; flash recovery recovery-clockwork-6.0.4.5-hammerhead.img&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Wait for the flashing procedure to complete.&lt;/p&gt;
&lt;h3 id="install-cyanogenmod"&gt;Install CyanogenMod&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Reboot into the recovery&lt;/strong&gt; using USB (alternatively &lt;kbd&gt;volume up + volume down + power&lt;/kbd&gt;, navigate with &lt;kbd&gt;volume up/down&lt;/kbd&gt;, and confirm using &lt;kbd&gt;power&lt;/kbd&gt; button):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;adb&lt;/span&gt; reboot recovery&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In &lt;em&gt;ClockworkMod Recovery&lt;/em&gt; select &lt;strong&gt;&lt;em&gt;wipe data/factory reset&lt;/em&gt;&lt;/strong&gt; (navigate with &lt;kbd&gt;volume up/down&lt;/kbd&gt;, and confirm using the &lt;kbd&gt;power&lt;/kbd&gt; button). Repeat the procedure for &lt;em&gt;wipe cache partition&lt;/em&gt;. In case of problems with encryption, also navigate to &lt;em&gt;mounts and storage&lt;/em&gt; and &lt;em&gt;format /data&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Afterwards transfer and install previously downloaded &lt;em&gt;CyanogenMod 11&lt;/em&gt; package by selecting &lt;em&gt;install zip&lt;/em&gt; and using the &lt;strong&gt;&lt;em&gt;install zip from sideload&lt;/em&gt;&lt;/strong&gt; method. Transfer the package over USB with:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;adb&lt;/span&gt; sideload cm-11-20140805-SNAPSHOT-M9-hammerhead.zip&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Wait for the flashing and installation to complete. The procedure has succeeded if there were no fatal error messages and you regained control over the menu at the top.&lt;/p&gt;
&lt;h3 id="install-google-apps"&gt;Install Google Apps&lt;/h3&gt;
&lt;p&gt;As &lt;em&gt;CyanogenMod 11&lt;/em&gt; comes without any apps from Google (especially without &lt;em&gt;Google Play Store&lt;/em&gt;) it is recommended to install them too.&lt;/p&gt;
&lt;p&gt;Again &lt;strong&gt;reboot into the recovery&lt;/strong&gt; (if not there yet) and install previously downloaded &lt;em&gt;Google Apps for CM 11&lt;/em&gt; by selecting &lt;em&gt;install zip&lt;/em&gt; and using the &lt;strong&gt;&lt;em&gt;install zip from sideload&lt;/em&gt;&lt;/strong&gt; method:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;adb&lt;/span&gt; reboot recovery
$ &lt;span class="kw"&gt;adb&lt;/span&gt; sideload gapps-kk-20140606-signed.zip&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="setup-wizard"&gt;Setup wizard&lt;/h3&gt;
&lt;p&gt;Once the installation has finished, you can select &lt;strong&gt;&lt;em&gt;reboot system now&lt;/em&gt;&lt;/strong&gt; and wait for it to boot the first time (takes a few minutes).&lt;/p&gt;
&lt;p&gt;A setup wizard will appear; it is recommended &lt;em&gt;not to set up a Google account yet&lt;/em&gt;, as it cannot be configured in detail at this point and it immediately initiates syncing and restoration procedures.&lt;/p&gt;
&lt;h2 id="security"&gt;Security&lt;/h2&gt;
&lt;h3 id="encrypt-phone"&gt;Encrypt phone&lt;/h3&gt;
&lt;p&gt;Newer Android phones also support a full-device encryption feature to prevent potential thieves from accessing your data. Note that encryption slightly decreases performance and is one-way only (a factory reset is required to turn it off).&lt;/p&gt;
&lt;p&gt;Unfortunately, by default the boot-time encryption password is linked with your lock screen PIN or password (other options are unavailable), which may be inconvenient if you are used to an unlock pattern. Nevertheless, it is possible to manually enable encryption independently of your lock screen.&lt;/p&gt;
&lt;p&gt;To do this, first open &lt;em&gt;Settings&lt;/em&gt;/&lt;em&gt;Lock screen&lt;/em&gt;/&lt;em&gt;Screen security&lt;/em&gt; and &lt;strong&gt;set up your preferred &lt;em&gt;Screen lock&lt;/em&gt; method&lt;/strong&gt; (such as &lt;em&gt;Pattern&lt;/em&gt;). Choose a good pattern, as it is not trivial to change it later on.&lt;/p&gt;
&lt;p&gt;Then &lt;strong&gt;enable encryption&lt;/strong&gt; independently on a rooted system through the terminal (choose a long secure password). If you are using USB, you will need to temporarily set &lt;em&gt;Settings&lt;/em&gt;/&lt;em&gt;Superuser&lt;/em&gt;/&lt;em&gt;Root access&lt;/em&gt; to &lt;em&gt;Apps and ADB&lt;/em&gt; to gain root privileges (turn it off afterwards):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;adb&lt;/span&gt; shell
$ &lt;span class="kw"&gt;su&lt;/span&gt;
$ &lt;span class="kw"&gt;vdc&lt;/span&gt; cryptfs enablecrypto inplace [password]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After a few seconds your Android device will reboot and begin the encryption process (which takes around 30 minutes). On completion the system reboots again, and from then on you only need to enter the chosen password once per boot.&lt;/p&gt;
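&lt;p&gt;To confirm the device really ended up encrypted, the encryption state can be queried over &lt;code&gt;adb&lt;/code&gt;. A hedged sketch (the helper only prints each command by default, so the steps can be reviewed without a device attached):&lt;/p&gt;

```shell
# DRY_RUN=1 (the default) just prints each command; set DRY_RUN=0
# with a device attached and USB debugging enabled to actually run it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}
run adb shell getprop ro.crypto.state   # prints "encrypted" on success
```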
&lt;h3 id="firewall"&gt;Firewall&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Install &lt;a href="http://play.google.com/store/apps/details?id=dev.ukanth.ufirewall"&gt;&lt;em&gt;AFWall+&lt;/em&gt; (Android Firewall +)&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;allow superuser&lt;/strong&gt; access forever. It is recommended to enable the following &lt;em&gt;Preferences&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;check &lt;em&gt;Enable notifications&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;check &lt;em&gt;Active Rules&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;check &lt;em&gt;LAN Control&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;check &lt;em&gt;IPv6 support&lt;/em&gt; (in case of problems copy the system's &lt;code&gt;ip6tables&lt;/code&gt; binary to the app directory)&lt;/li&gt;
&lt;li&gt;check &lt;em&gt;Confirm box on AFWall+ disable&lt;/em&gt; (to prevent accidental disabling)&lt;/li&gt;
&lt;li&gt;check &lt;em&gt;Enable Firewall Logs&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Do not forget to &lt;em&gt;Enable firewall&lt;/em&gt; and &lt;em&gt;Apply&lt;/em&gt;. Now use apps normally and check the logs if any app has internet connection issues (then enable access for it).&lt;/p&gt;
&lt;p&gt;Note that &lt;em&gt;CM Updater&lt;/em&gt; does not work with &lt;em&gt;CWM 6.0.4.5&lt;/em&gt;, so there is no need to allow internet access for &lt;em&gt;Settings&lt;/em&gt; and &lt;em&gt;CM Updater&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In case &lt;em&gt;Miracast display&lt;/em&gt; functionality is not working, you may need to use the following custom startup script:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;&lt;span class="ot"&gt;$IPTABLES&lt;/span&gt; &lt;span class="kw"&gt;-A&lt;/span&gt; &lt;span class="st"&gt;"droidwall-wifi"&lt;/span&gt; -p udp --destination 224.0.0.0/16 -j RETURN
&lt;span class="ot"&gt;$IPTABLES&lt;/span&gt; &lt;span class="kw"&gt;-A&lt;/span&gt; &lt;span class="st"&gt;"droidwall-lan"&lt;/span&gt; -p udp -j RETURN
&lt;span class="ot"&gt;$IPTABLES&lt;/span&gt; &lt;span class="kw"&gt;-A&lt;/span&gt; &lt;span class="st"&gt;"droidwall-input"&lt;/span&gt; -p udp --source 192.168.0.0/16 -j RETURN&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="install-and-update"&gt;Install and update&lt;/h3&gt;
&lt;p&gt;You will probably want to setup &lt;em&gt;Google Play Store&lt;/em&gt; to enable simple installation and updating of apps.&lt;/p&gt;
&lt;p&gt;Apps you really want to install:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://play.google.com/store/apps/details?id=dev.ukanth.ufirewall"&gt;&lt;em&gt;AFWall+&lt;/em&gt; (Android Firewall +)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://play.google.com/store/apps/details?id=com.lostnet.fw.free"&gt;&lt;em&gt;LostNet NoRoot Firewall&lt;/em&gt;&lt;/a&gt; (interesting alternative)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Google Camera&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Titanium Backup&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Also check settings of apps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Google Play Store&lt;/em&gt; - in &lt;em&gt;Settings&lt;/em&gt;: disable auto-update apps&lt;/li&gt;
&lt;li&gt;&lt;em&gt;YouTube&lt;/em&gt; - in &lt;em&gt;Settings&lt;/em&gt;/&lt;em&gt;General&lt;/em&gt;: turn off &lt;em&gt;Improve YouTube&lt;/em&gt;, check &lt;em&gt;Limit mobile data usage&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Hangouts&lt;/em&gt; - turn off &lt;em&gt;Improve&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Google+&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Google Maps&lt;/em&gt; - turn off &lt;em&gt;Shake to send feedback&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Phone&lt;/em&gt; - in &lt;em&gt;Settings&lt;/em&gt;/&lt;em&gt;Advanced&lt;/em&gt;: turn off &lt;em&gt;Call lookup&lt;/em&gt; (turn off &lt;em&gt;Caller ID by Google&lt;/em&gt;, turn off &lt;em&gt;Nearby places&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Google Translate&lt;/em&gt; - in &lt;em&gt;Settings&lt;/em&gt;/&lt;em&gt;Data usage&lt;/em&gt;: turn off &lt;em&gt;Prefer network text-to-speech&lt;/em&gt;, turn off &lt;em&gt;Improve camera input&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="privacy-settings"&gt;Privacy settings&lt;/h3&gt;
&lt;p&gt;Many security and privacy options are available and you may want to check them out under &lt;em&gt;Settings&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Data usage&lt;/em&gt;: set mobile data limit, enable show Wi-Fi usage&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Mobile networks&lt;/em&gt;/&lt;em&gt;Access Point Names&lt;/em&gt;/&lt;em&gt;Internet APN&lt;/em&gt;: remove proxy setting&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Wireless &amp;amp; networks/More…&lt;/em&gt;/&lt;em&gt;NFC&lt;/em&gt;: turn off&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Wireless &amp;amp; networks/More…&lt;/em&gt;/&lt;em&gt;Tethering &amp;amp; portable hotspot&lt;/em&gt;: setup Wi-Fi hotspot name and password&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Display &amp;amp; lights&lt;/em&gt;/&lt;em&gt;Cast screen&lt;/em&gt;: turn off&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Location&lt;/em&gt;/&lt;em&gt;Google Location Reporting&lt;/em&gt;: turn off for all Google accounts&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Security&lt;/em&gt;/&lt;em&gt;Owner info&lt;/em&gt;: set to something sensible&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Security&lt;/em&gt;/&lt;em&gt;Set up SIM card lock&lt;/em&gt;: lock SIM card&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Security&lt;/em&gt;/&lt;em&gt;Make passwords visible&lt;/em&gt;: uncheck&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Privacy&lt;/em&gt;/&lt;em&gt;Privacy Guard&lt;/em&gt;: check enabled by default, show built-in apps, enable guard on all except &lt;em&gt;Dialer&lt;/em&gt;, long tap to enable location on &lt;em&gt;Maps&lt;/em&gt; and &lt;em&gt;Google Play services&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Privacy&lt;/em&gt;/&lt;em&gt;CyanogenMod statistics&lt;/em&gt;: uncheck enable reporting&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Languages &amp;amp; input&lt;/em&gt;: uncheck Google voice typing&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Languages &amp;amp; input&lt;/em&gt;/&lt;em&gt;Voice Search&lt;/em&gt;: turn off Ok Google hotword detection, turn off offline speech recognition automatic updates, turn off personalized recognition&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Backup &amp;amp; reset&lt;/em&gt;: enable backup my data, uncheck automatic restore&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Superuser&lt;/em&gt;/&lt;em&gt;Settings&lt;/em&gt;: set superuser access for apps only, set PIN protection&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Also check out &lt;em&gt;Google Settings&lt;/em&gt; app:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Search &amp;amp; Now&lt;/em&gt;/&lt;em&gt;Google Now&lt;/em&gt;: turn off&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Search &amp;amp; Now&lt;/em&gt;/&lt;em&gt;Accounts &amp;amp; privacy&lt;/em&gt;: turn off commute sharing, search on google.com&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Search &amp;amp; Now&lt;/em&gt;/&lt;em&gt;Accounts &amp;amp; privacy&lt;/em&gt;: turn off Web History, turn off Personal results&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Ads&lt;/em&gt;: reset advertising ID, opt out of interest-based ads&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Android Device Manager&lt;/em&gt;: decide whether to allow remotely locating this device or locking and erasing data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Finally, it is time to connect your device to the internet over Wi-Fi and add your Google and other accounts (carefully specifying what data they should sync).&lt;/p&gt;
&lt;h3 id="locking"&gt;Locking&lt;/h3&gt;
&lt;p&gt;Once you have tested that everything works and are satisfied with the operating system, it makes sense to lock the bootloader again. This step secures your data, as everything will be wiped clean the next time anyone unlocks it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reboot into the bootloader&lt;/strong&gt; using USB (alternatively &lt;kbd&gt;volume up + volume down + power&lt;/kbd&gt;) and &lt;strong&gt;lock&lt;/strong&gt; the bootloader:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;adb&lt;/span&gt; reboot bootloader
$ &lt;span class="kw"&gt;fastboot&lt;/span&gt; oem lock&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="other"&gt;Other&lt;/h2&gt;
&lt;h3 id="screenshots"&gt;Screenshots&lt;/h3&gt;
&lt;div class="row"&gt;
&lt;figure class="col-sm-3 text-center"&gt;
&lt;a href="http://gw.tnode.com/android/img/cyanogenmod-11-desktop.png"&gt;&lt;img alt="Screenshot of desktop" height="300" src="http://gw.tnode.com/android/img/cyanogenmod-11-desktop.png" width="226"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Screenshot of desktop&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://wiki.CyanogenMod.org/w/Install_CM_for_hammerhead"&gt;http://wiki.CyanogenMod.org/w/Install_CM_for_hammerhead&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://pocketnow.com/2014/04/17/nexus-5-CyanogenMod-11"&gt;http://pocketnow.com/2014/04/17/nexus-5-CyanogenMod-11&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://developers.google.com/android/nexus/images"&gt;http://developers.google.com/android/nexus/images&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://forum.xda-developers.com/showpost.php?p=35514297&amp;amp;postcount=24"&gt;http://forum.xda-developers.com/showpost.php?p=35514297&amp;amp;postcount=24&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="android"></category><category term="phone"></category><category term="flash"></category><category term="crypto"></category><category term="privacy"></category><category term="setup"></category></entry><entry><title>Encrypted partition in Debian 7</title><link href="http://gw.tnode.com/debian/encrypted-partition-in-debian-7/" rel="alternate"></link><updated>2014-08-07T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-08-06:debian/encrypted-partition-in-debian-7/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="LUKS logo" height="112" src="http://gw.tnode.com/debian/img/luks-logo.png" width="330"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;Creating a &lt;strong&gt;&lt;em&gt;LUKS&lt;/em&gt;&lt;/strong&gt; encrypted partition with &lt;a href="http://en.wikipedia.org/wiki/Dm-crypt"&gt;&lt;em&gt;dm-crypt&lt;/em&gt;&lt;/a&gt; on &lt;a href="http://www.debian.org/"&gt;&lt;em&gt;Debian 7&lt;/em&gt;&lt;/a&gt; or similar (such as &lt;em&gt;Ubuntu&lt;/em&gt; or &lt;em&gt;Raspbian&lt;/em&gt;) is simple. Mounting and using it in &lt;em&gt;KDE&lt;/em&gt; is even simpler.&lt;/p&gt;
&lt;h2 id="preparation"&gt;Preparation&lt;/h2&gt;
&lt;h3 id="requirements"&gt;Requirements&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;cryptsetup&lt;/em&gt; tool &lt;small&gt;(1.4)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;dm-crypt&lt;/em&gt; support in kernel (by default)&lt;/li&gt;
&lt;li&gt;root or sudo permissions (run below commands as root)&lt;/li&gt;
&lt;li&gt;empty partition&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;aptitude&lt;/span&gt; install cryptsetup&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="determine-right-partition"&gt;Determine right partition&lt;/h3&gt;
&lt;p&gt;Make sure that the target partition or block device is empty, has the right size, and that you know its &lt;strong&gt;exact device name&lt;/strong&gt; (e.g. &lt;code&gt;/dev/sdb1&lt;/code&gt;). All data on it will be lost, so double-check its device name using commands such as:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;fdisk&lt;/span&gt; -l /dev/sdb
$ &lt;span class="kw"&gt;gdisk&lt;/span&gt; -l /dev/sdb&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="overwrite-with-random-data"&gt;Overwrite with random data&lt;/h3&gt;
&lt;p&gt;To actually remove all previous data from a partition, deleting all files is not enough, as forensic tools can reconstruct them; one has to &lt;strong&gt;overwrite every single byte&lt;/strong&gt; of the partition. Overwriting it with random data also makes it impossible to distinguish which sectors contain encrypted data and which do not.&lt;/p&gt;
&lt;p&gt;The obvious way of doing this with the &lt;code&gt;/dev/urandom&lt;/code&gt; random generator is very slow and can take days on larger partitions (&amp;gt;10 GB):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;dd&lt;/span&gt; if=/dev/urandom of=/dev/sdb1 bs=16M&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A &lt;strong&gt;much faster&lt;/strong&gt; randomization method is to use a temporary encryption layer on the partition and fill it with zeros. Encrypting the zeros with an arbitrary cipher results in data that is indistinguishable from randomly generated data. Choose a temporary random password for this procedure:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cryptsetup&lt;/span&gt; luksFormat /dev/sdb1

&lt;span class="kw"&gt;WARNING&lt;/span&gt;!
========
&lt;span class="kw"&gt;This&lt;/span&gt; will overwrite data on /dev/sdb1 irrevocably.

&lt;span class="kw"&gt;Are&lt;/span&gt; you sure? (Type uppercase yes)&lt;span class="kw"&gt;:&lt;/span&gt; YES
&lt;span class="kw"&gt;Enter&lt;/span&gt; LUKS passphrase: (random)
&lt;span class="kw"&gt;Verify&lt;/span&gt; passphrase: (random)

$ &lt;span class="kw"&gt;cryptsetup&lt;/span&gt; luksOpen /dev/sdb1 sdb1_crypt
&lt;span class="kw"&gt;Enter&lt;/span&gt; passphrase for /dev/sdb1: (random)

$ &lt;span class="kw"&gt;dd&lt;/span&gt; if=/dev/zero of=/dev/mapper/sdb1_crypt bs=16M
&lt;span class="kw"&gt;1464+0&lt;/span&gt; records in
&lt;span class="kw"&gt;1463+0&lt;/span&gt; records out
&lt;span class="kw"&gt;196369113088&lt;/span&gt; bytes (196 GB) &lt;span class="kw"&gt;copied&lt;/span&gt;, 5341.39 s, 36.8 MB/s

$ &lt;span class="kw"&gt;cryptsetup&lt;/span&gt; luksClose sdb1_crypt&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="formatting"&gt;Formatting&lt;/h2&gt;
&lt;p&gt;The encrypted partition consists of an encryption layer, such as &lt;em&gt;dm-crypt&lt;/em&gt; with &lt;em&gt;LUKS&lt;/em&gt;, and a file system inside it.&lt;/p&gt;
&lt;h3 id="luks-encryption-volume"&gt;LUKS encryption volume&lt;/h3&gt;
&lt;p&gt;First step of setting up a user-friendly encrypted partition is formatting it as a &lt;em&gt;LUKS&lt;/em&gt; volume. &lt;em&gt;LUKS&lt;/em&gt; specifies a standard secure key management system and format for disk encryption.&lt;/p&gt;
&lt;p&gt;For encryption any cipher, key size, and hashing function supported by the kernel can be used. Unfortunately the default &lt;code&gt;aes-cbc-essiv:sha256&lt;/code&gt; has known weaknesses, so it is recommended to select a stronger scheme such as &lt;code&gt;aes-xts-plain64&lt;/code&gt; (safe for partitions &amp;gt;1 TB, does not depend on ESSIV, better tampering protection). To &lt;strong&gt;format the &lt;em&gt;LUKS&lt;/em&gt; volume&lt;/strong&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cryptsetup&lt;/span&gt; luksFormat --cipher aes-xts-plain64 /dev/sdb1

&lt;span class="kw"&gt;WARNING&lt;/span&gt;!
========
&lt;span class="kw"&gt;This&lt;/span&gt; will overwrite data on /dev/sdb1 irrevocably.

&lt;span class="kw"&gt;Are&lt;/span&gt; you sure? (Type uppercase yes)&lt;span class="kw"&gt;:&lt;/span&gt; YES
&lt;span class="kw"&gt;Enter&lt;/span&gt; LUKS passphrase: [password]
&lt;span class="kw"&gt;Verify&lt;/span&gt; passphrase: [password]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because there is nothing you can do if you forget the above password, it is highly recommended to &lt;strong&gt;set up an alternative emergency-restore password&lt;/strong&gt; and put it in a safe place. Luckily &lt;em&gt;LUKS&lt;/em&gt; supports up to 8 different passwords or key slots for unlocking the partition (protected against dictionary attacks using the PBKDF2 iteration scheme).&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cryptsetup&lt;/span&gt; luksAddKey /dev/sdb1 --key-slot 7
&lt;span class="kw"&gt;Enter&lt;/span&gt; any passphrase: [password]
&lt;span class="kw"&gt;Enter&lt;/span&gt; new passphrase for key slot: [emergency]
&lt;span class="kw"&gt;Verify&lt;/span&gt; passphrase: [emergency]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check the &lt;em&gt;LUKS&lt;/em&gt; header information to verify everything went well. It should look something like:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cryptsetup&lt;/span&gt; luksDump /dev/sdb1
&lt;span class="kw"&gt;LUKS&lt;/span&gt; header information for /dev/sdb1

&lt;span class="kw"&gt;Version&lt;/span&gt;:        1
&lt;span class="kw"&gt;Cipher&lt;/span&gt; name:    aes
&lt;span class="kw"&gt;Cipher&lt;/span&gt; mode:    xts-plain64
&lt;span class="kw"&gt;Hash&lt;/span&gt; spec:      sha1
&lt;span class="kw"&gt;Payload&lt;/span&gt; offset: 4096
&lt;span class="kw"&gt;MK&lt;/span&gt; bits:        256
&lt;span class="kw"&gt;MK&lt;/span&gt; digest:      21 34 ...
&lt;span class="kw"&gt;MK&lt;/span&gt; salt:        5a e6 ...
&lt;span class="kw"&gt;MK&lt;/span&gt; iterations:  30000
&lt;span class="kw"&gt;UUID&lt;/span&gt;:           a84f890c-...

&lt;span class="kw"&gt;Key&lt;/span&gt; Slot 0: ENABLED
        &lt;span class="kw"&gt;Iterations&lt;/span&gt;:             200000
        &lt;span class="kw"&gt;Salt&lt;/span&gt;:                   81 b0 ...
        &lt;span class="kw"&gt;Key&lt;/span&gt; material offset:    8
        &lt;span class="kw"&gt;AF&lt;/span&gt; stripes:             4000
&lt;span class="kw"&gt;Key&lt;/span&gt; Slot 1: DISABLED
&lt;span class="kw"&gt;Key&lt;/span&gt; Slot 2: DISABLED
&lt;span class="kw"&gt;Key&lt;/span&gt; Slot 3: DISABLED
&lt;span class="kw"&gt;Key&lt;/span&gt; Slot 4: DISABLED
&lt;span class="kw"&gt;Key&lt;/span&gt; Slot 5: DISABLED
&lt;span class="kw"&gt;Key&lt;/span&gt; Slot 6: DISABLED
&lt;span class="kw"&gt;Key&lt;/span&gt; Slot 7: ENABLED
        &lt;span class="kw"&gt;Iterations&lt;/span&gt;:             200000
        &lt;span class="kw"&gt;Salt&lt;/span&gt;:                   33 b7 ...
        &lt;span class="kw"&gt;Key&lt;/span&gt; material offset:    1800
        &lt;span class="kw"&gt;AF&lt;/span&gt; stripes:             4000&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="file-system-inside"&gt;File system inside&lt;/h3&gt;
&lt;p&gt;After setting up the encryption layer that can be opened when needed, one should &lt;strong&gt;format a file system inside the &lt;em&gt;LUKS&lt;/em&gt; volume&lt;/strong&gt;. Nowadays &lt;code&gt;ext4&lt;/code&gt; or &lt;code&gt;btrfs&lt;/code&gt; are commonly chosen:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cryptsetup&lt;/span&gt; luksOpen /dev/sdb1 sdb1_crypt
&lt;span class="kw"&gt;Enter&lt;/span&gt; passphrase for /dev/sdb1: [password]

$ &lt;span class="kw"&gt;mkfs&lt;/span&gt; -t ext4 -L sdb1_crypt /dev/mapper/sdb1_crypt
&lt;span class="kw"&gt;mke2fs&lt;/span&gt; 1.42.5 (29-Jul-2012)
&lt;span class="kw"&gt;Filesystem&lt;/span&gt; label=sdb1_crypt
&lt;span class="kw"&gt;OS&lt;/span&gt; type: Linux
&lt;span class="kw"&gt;Block&lt;/span&gt; size=4096 (log=2)
&lt;span class="kw"&gt;Fragment&lt;/span&gt; size=4096 (log=2)
&lt;span class="ot"&gt;Stride=&lt;/span&gt;0 &lt;span class="kw"&gt;blocks&lt;/span&gt;, Stripe width=0 blocks
&lt;span class="kw"&gt;11993088&lt;/span&gt; inodes, 47941678 blocks
&lt;span class="kw"&gt;2397083&lt;/span&gt; blocks (5.00%) &lt;span class="kw"&gt;reserved&lt;/span&gt; for the super user
&lt;span class="kw"&gt;First&lt;/span&gt; data block=0
&lt;span class="kw"&gt;Maximum&lt;/span&gt; filesystem blocks=4294967296
&lt;span class="kw"&gt;1464&lt;/span&gt; block groups
&lt;span class="kw"&gt;32768&lt;/span&gt; blocks per group, 32768 fragments per group
&lt;span class="kw"&gt;8192&lt;/span&gt; inodes per group
&lt;span class="kw"&gt;Superblock&lt;/span&gt; backups stored on blocks: 
        &lt;span class="kw"&gt;32768&lt;/span&gt;, 98304, ...

&lt;span class="kw"&gt;Allocating&lt;/span&gt; group tables: done
&lt;span class="kw"&gt;Writing&lt;/span&gt; inode tables: done
&lt;span class="kw"&gt;Creating&lt;/span&gt; journal (32768 blocks)&lt;span class="kw"&gt;:&lt;/span&gt; done
&lt;span class="kw"&gt;Writing&lt;/span&gt; superblocks and filesystem accounting information: done

$ &lt;span class="kw"&gt;cryptsetup&lt;/span&gt; luksClose sdb1_crypt&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;Now the encrypted partition is ready to be mounted and used.&lt;/p&gt;
&lt;p&gt;If you are using &lt;em&gt;KDE&lt;/em&gt;, the default file manager &lt;em&gt;Dolphin&lt;/em&gt; is capable of mounting the encrypted partition &lt;strong&gt;just by clicking&lt;/strong&gt; on it. Afterwards it can be used just like a normal folder, yet your data will be seamlessly encrypted in the background.&lt;/p&gt;
&lt;p&gt;From the command line it can be mounted to &lt;code&gt;/mnt/sdb1&lt;/code&gt; using:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cryptsetup&lt;/span&gt; luksOpen /dev/sdb1 sdb1_crypt
$ &lt;span class="kw"&gt;mount&lt;/span&gt; /dev/mapper/sdb1_crypt /mnt/sdb1&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It is also possible to set up &lt;strong&gt;automatic mounting at boot time&lt;/strong&gt; by specifying it in &lt;code&gt;/etc/crypttab&lt;/code&gt; and &lt;code&gt;/etc/fstab&lt;/code&gt;. Afterwards do not forget to update the boot-time initramfs archive.&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;grep&lt;/span&gt; sdb1_crypt /etc/crypttab
&lt;span class="kw"&gt;sdb1_crypt&lt;/span&gt; UUID=a84f890c-... none luks
$ &lt;span class="kw"&gt;grep&lt;/span&gt; sdb1_crypt /etc/fstab
&lt;span class="kw"&gt;/dev/mapper/sdb1_crypt&lt;/span&gt;  /mnt/sdb1  ext4  relatime  0  2
$ &lt;span class="kw"&gt;update-initramfs&lt;/span&gt; -u
$ &lt;span class="kw"&gt;reboot&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.markus-gattol.name/ws/dm-crypt_luks.html"&gt;http://www.markus-gattol.name/ws/dm-crypt_luks.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="debian"></category><category term="crypto"></category><category term="setup"></category></entry><entry><title>Projectors with Android in 2014</title><link href="http://gw.tnode.com/android/projectors-with-android-in-2014/" rel="alternate"></link><updated>2014-06-20T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-06-20:android/projectors-with-android-in-2014/</id><summary type="html">
&lt;p&gt;With the advent of new technologies in 2014, like high resolution TFT LCD displays, bright long-lasting LED lamps, and powerful systems-on-a-chip running &lt;em&gt;Android 4.2&lt;/em&gt; (or similar), &lt;strong&gt;reasonably-priced home theater projectors&lt;/strong&gt; have finally arrived. Five or more years ago good projectors were very loud, expensive (thousands of euros), and had lower resolutions (~800x600), but fairly bright light bulbs. In the meantime a series of ultra-cheap mobile mini-projectors appeared that had terribly low resolution (~320x240) and even lower brightness (~20 lumens).&lt;/p&gt;
&lt;p&gt;The broad range of offers and features might confuse buyers and result in purchasing an over-priced and less-suitable projector for their needs. Below we list the most important features to watch for when buying a new cheap projector with &lt;em&gt;Android 4.2&lt;/em&gt;, followed by a list of examples from online shops.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;target usage scenario&lt;/strong&gt; is watching movies and reading papers on a projection screen or ceiling in a partially dark room. For this a high enough resolution, high uniformity, and a bright lamp are needed. Fan noise is also a big issue when watching movies, so it has to be minimized. One essential way of using the projector is by connecting it to a computer (or other multimedia devices) using HDMI or VGA cables. On the other hand, the built-in powerful mini-computer with Android enables streaming movies or similar content directly from USB storage devices, NAS servers, or even the internet (like &lt;em&gt;YouTube&lt;/em&gt;, &lt;em&gt;PopcornTime&lt;/em&gt;). To avoid wireless network performance issues it is a good idea to connect the projector with an Ethernet cable, or to have support for larger SD cards or USB disks to temporarily store content.&lt;/p&gt;
&lt;h2 id="important-features"&gt;Important features&lt;/h2&gt;
&lt;p&gt;Minimal projector features to watch for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Native resolution: 1280x768&lt;/li&gt;
&lt;li&gt;Lamp: 200 W LED lamp (with 50000 hours life span)&lt;/li&gt;
&lt;li&gt;Brightness: 4500 ANSI lumens&lt;/li&gt;
&lt;li&gt;Contrast ratio: 4000:1 (dynamic)&lt;/li&gt;
&lt;li&gt;Uniformity: 90%&lt;/li&gt;
&lt;li&gt;Noise: 30 dB (low noise)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Minimal system features to watch for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;OS: Android 4.2&lt;/li&gt;
&lt;li&gt;CPU: ARM Cortex A9 dual-core 1.5 GHz&lt;/li&gt;
&lt;li&gt;RAM: 1 GB&lt;/li&gt;
&lt;li&gt;Internal storage: 4 GB&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Interfaces:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2x USB port&lt;/li&gt;
&lt;li&gt;2x HDMI port&lt;/li&gt;
&lt;li&gt;VGA port&lt;/li&gt;
&lt;li&gt;Audio L/R out&lt;/li&gt;
&lt;li&gt;SD card slot (up to 32 GB)&lt;/li&gt;
&lt;li&gt;RJ45 port (Ethernet)&lt;/li&gt;
&lt;li&gt;WiFi 802.11 b/g/n&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Other:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DLNA/UPnP support&lt;/li&gt;
&lt;li&gt;Remote controller&lt;/li&gt;
&lt;li&gt;Projection distance: 1.5m-5.5m&lt;/li&gt;
&lt;li&gt;Projection methods: front, rear, ceiling, rear ceiling&lt;/li&gt;
&lt;li&gt;Power consumption: 200 W&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="online-shops"&gt;Online shops&lt;/h2&gt;
&lt;p&gt;A quick comparison of cheap home theater projectors with a good price-performance ratio available in online shops. Only missing or noteworthy features are listed, although misleading specifications on similar products are hard to evaluate.&lt;/p&gt;
&lt;p&gt;Seem good:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;ACME 86+WIFI&lt;/em&gt;, &lt;em&gt;ACME New 86+&lt;/em&gt;, &lt;em&gt;ACME LED86+&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;unknown WiFi?&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.aliexpress.com/store/product/4000-Lumens-Native1280-800-Built-in-Android-WiFi-3D-LED-Projector-Perfect-For-Home-Theater/515957_1748087029.html"&gt;US$ 355.00 at AliExpress&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.aliexpress.com/item/Amazing-Display-Full-HD-LED-Home-Theater-Android-Wifi-Projector-Max-4000lumens-LCD-Video-Game-Smart/1561799187.html"&gt;US$ 394.15 at AliExpress&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;KT 86+W&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;unknown uniformity?&lt;/li&gt;
&lt;li&gt;noise 32 dB&lt;/li&gt;
&lt;li&gt;only WiFi 802.11 b/g&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.aliexpress.com/item/Free-shipping-3d-led-projector-full-hd-built-in-android-4-2-2-system-4000-lumens/1863389834.html"&gt;US$ 349.00 at AliExpress&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Smart 86&lt;/em&gt;, &lt;em&gt;SEESMART LED86&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;unknown uniformity, noise?&lt;/li&gt;
&lt;li&gt;only WiFi 802.11 b/g&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.aliexpress.com/item/Wireless-connect-to-iPhone-iPad-Brightest-4000lumens-Built-in-Android-4-2-2-Native-Full-HD/1766425937.html"&gt;US$ 356.00 at AliExpress&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.aliexpress.com/item/Wireless-connect-to-iPhone-iPad-Brightest-4000lumens-Built-in-Android-4-2-2-Native-Full-HD/940964616.html"&gt;US$ 356.15 at AliExpress&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;SmartIdea LED-86+(Wifi)&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;unknown uniformity, noise?&lt;/li&gt;
&lt;li&gt;only WiFi 802.11 b/g&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.aliexpress.com/item/4000lumens-Android-4-2-Projector-Full-HD-LED-Daytime-Projector-LCD-3D-Wifi-smart-Proyector-with/988643623.html"&gt;US$ 355.00 at AliExpress&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Seem poor:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;ATCO CT03H2 New&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;unknown uniformity, noise?&lt;/li&gt;
&lt;li&gt;unknown RAM?&lt;/li&gt;
&lt;li&gt;no RJ45 port&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.aliexpress.com/item/ATCO-Full-HD-1080P-3500Lumen-210W-Led-lamp-Android-4-2-WiFi-Smart-4000-1-Portable/679698856.html"&gt;US$ 360.36 at AliExpress&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;DroidBeam&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;no RJ45 port&lt;/li&gt;
&lt;li&gt;brightness only 3000 lumens, contrast 2000:1&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.monastiraki.org/m-dual-core-android-4-2-projector-droidbeam-m/?utm_source=google&amp;amp;utm_medium=base&amp;amp;utm_campaign=11%20Jun%202014%2001:18"&gt;US$ 408.56 at Monastaraki&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;EJL EPW58D&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;unknown CPU, RAM?&lt;/li&gt;
&lt;li&gt;brightness only 3000 lumens, contrast 1000:1, uniformity 80%&lt;/li&gt;
&lt;li&gt;only Android 4.0.4&lt;/li&gt;
&lt;li&gt;only WiFi 802.11 b/g&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.linkdelight.com/P0009740-EPW58D-Android-40-Wifi-Smart-1080P-HD-LED-LCD-Home-Cinema-Video-3D-Projector-with-Mini-Wireless-Air-Mouse-Keyboard-Combo.html"&gt;US$ 380.47 at LinkDelight&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.dx.com/p/epw58d-1280-x-800-hd-home-theater-android-projector-w-2-x-hdmi-2-x-usb-vga-tv-av-rj45-sd-272200"&gt;US$ 451.20 at DX&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Oley H2&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;unknown uniformity, noise?&lt;/li&gt;
&lt;li&gt;brightness only 3000 lumens, contrast 2000:1&lt;/li&gt;
&lt;li&gt;only Android 4.1&lt;/li&gt;
&lt;li&gt;no RJ45 port&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.dx.com/p/oley-h2-android-4-1-1080p-hd-projector-w-wi-fi-memory-1gb-white-242177"&gt;US$ 440.12 at DX&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;RuiQ SV-128&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;unknown uniformity, noise?&lt;/li&gt;
&lt;li&gt;unknown CPU, RAM, internal storage?&lt;/li&gt;
&lt;li&gt;brightness only 2600 lumens, contrast 1000:1&lt;/li&gt;
&lt;li&gt;only Android 4.0&lt;/li&gt;
&lt;li&gt;no WiFi&lt;/li&gt;
&lt;li&gt;price: &lt;a href="http://www.dx.com/p/ruiq-sv-128-2600-lumens-android-4-0-lcd-projector-w-hdmi-vga-ypbpr-black-239996"&gt;US$ 383.71 at DX&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.alectrosystems.com/video/projection/projector_specs.htm"&gt;http://www.alectrosystems.com/video/projection/projector_specs.htm&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="android"></category><category term="projector"></category><category term="comparison"></category></entry><entry><title>[UI-part3] Incremental learning from data streams</title><link href="http://gw.tnode.com/student/ui-part3/" rel="alternate"></link><updated>2014-05-21T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-05-14:student/ui-part3/</id><summary type="html">
&lt;p&gt;&lt;strong&gt;Course&lt;/strong&gt;: &lt;a href="https://ucilnica.fri.uni-lj.si/course/view.php?id=81"&gt;https://ucilnica.fri.uni-lj.si/course/view.php?id=81&lt;/a&gt;&lt;br/&gt;&lt;strong&gt;Lecturer&lt;/strong&gt;: izr. prof. dr. Zoran Bosnić&lt;br/&gt;&lt;strong&gt;Language&lt;/strong&gt;: English&lt;br/&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-05-14&lt;/p&gt;
&lt;p&gt;Course overview (part 3):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;comparison of stationary learning vs. incremental learning; streams, design, limitations&lt;/li&gt;
&lt;li&gt;data summarization: statistics, histogram, wavelets, condensing datasets&lt;/li&gt;
&lt;li&gt;incremental learning models: incremental decision trees (VFDT, functional tree leaves), Hoeffding bound&lt;/li&gt;
&lt;li&gt;concept drift/novelty detection, reacting to changes: extreme values, decision structure, frequency, distances&lt;/li&gt;
&lt;li&gt;clustering from data streams: hierarchical, micro, and grid clustering&lt;/li&gt;
&lt;li&gt;evaluation of streaming algorithms&lt;/li&gt;
&lt;li&gt;matrix factorization with temporal dynamics (recommendation systems)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Lectures include hands-on exercises in the statistical package R. Homework problem for grading: a competition in modeling given streaming data. Project: to be discussed individually; take some data related to your PhD and apply incremental learning to it.&lt;/p&gt;
&lt;h2 id="learning-from-data-streams"&gt;Learning from data streams&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-05-14&lt;/p&gt;
&lt;h3 id="introduction"&gt;Introduction&lt;/h3&gt;
&lt;p&gt;Static learning:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;fixed dataset&lt;/li&gt;
&lt;li&gt;relational/attributional form&lt;/li&gt;
&lt;li&gt;classification/regression prediction problems&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Incremental learning:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;data streams&lt;/em&gt; = changing data&lt;/li&gt;
&lt;li&gt;&lt;em&gt;concept drift&lt;/em&gt; = unpredictable data distribution&lt;/li&gt;
&lt;li&gt;&lt;em&gt;time series&lt;/em&gt; = measurements with timestamps, transformation into relational data is possible&lt;/li&gt;
&lt;li&gt;&lt;em&gt;sliding window&lt;/em&gt; = holds most recent examples
&lt;ul&gt;
&lt;li&gt;sequence-based window – e.g. the last 20 measurements&lt;/li&gt;
&lt;li&gt;timestamp-based window:
&lt;ul&gt;
&lt;li&gt;linear – e.g. 20 samples with 15 min between them&lt;/li&gt;
&lt;li&gt;non-linear – 20 samples with increasing space between them&lt;/li&gt;
&lt;li&gt;adaptive – shrinks on demand (for concept drift)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="algorithm-adwin-adaptive-sliding-window"&gt;Algorithm: ADWIN (ADaptive sliding WINdow)&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;split the window into two subwindows and check whether their averages are the same (testing all split points)&lt;/li&gt;
&lt;li&gt;drop the older elements if they are not&lt;/li&gt;
&lt;li&gt;the comparison of means from the two subwindows must take into account their stability (from how many elements each mean was computed)&lt;/li&gt;
&lt;li&gt;one parameter &lt;span class="math"&gt;\(\delta\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
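&lt;p&gt;The split-and-compare idea above can be sketched as follows. This is a simplified illustration only, not the actual ADWIN implementation (which uses exponential histograms and a slightly different bound to avoid testing all splits):&lt;/p&gt;

```python
from math import log, sqrt

def adwin_shrink(window, delta=0.01):
    """Simplified sketch of the ADWIN idea: test every split of the
    window into two subwindows and drop the older part while their
    means differ by more than a Hoeffding-style bound.  The bound uses
    the harmonic mean of the subwindow sizes, so the stability of each
    mean (how many elements it was computed from) is accounted for;
    delta is the single confidence parameter."""
    cut = True
    while cut:
        cut = False
        n = len(window)
        total = sum(window)
        head = 0.0
        for i in range(1, n):
            head += window[i - 1]
            mean0, mean1 = head / i, (total - head) / (n - i)
            m = 1.0 / (1.0 / i + 1.0 / (n - i))  # harmonic mean of sizes
            eps = sqrt(log(4.0 / delta) / (2.0 * m))
            if abs(mean0 - mean1) > eps:
                window = window[i:]              # drop the older subwindow
                cut = True
                break
    return window

# stable stream followed by a level shift: the older regime is dropped
w = [0.1] * 30 + [0.9] * 30
shrunk = adwin_shrink(w)
```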
&lt;h3 id="data-synopsis"&gt;Data synopsis&lt;/h3&gt;
&lt;p&gt;Compress incoming information from a stream to make processing feasible, instead of storing all data.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;representative sample: reservoir sampling algorithm&lt;/li&gt;
&lt;li&gt;histograms&lt;/li&gt;
&lt;li&gt;sketches&lt;/li&gt;
&lt;li&gt;discrete transforms and wavelets: Haar wavelets&lt;/li&gt;
&lt;/ul&gt;
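&lt;p&gt;The representative-sample entry refers to reservoir sampling, which keeps a uniform random sample of fixed size from a stream of unknown length. A minimal sketch (the function name and seed are illustrative):&lt;/p&gt;

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Maintain a uniform random sample of k items from a stream of
    unknown length using O(k) memory (classic Algorithm R)."""
    rng = rng or random.Random(0)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)  # item i is kept with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10_000), k=10)
```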
&lt;h2 id="predicting-from-data-streams"&gt;Predicting from data streams&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-05-21&lt;/p&gt;
&lt;p&gt;Options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;for time series&lt;/em&gt;: statistics, transformations, fit coefficients&lt;/li&gt;
&lt;li&gt;&lt;em&gt;for relational/attributional data&lt;/em&gt;: adapt common learning algorithms, specialized incremental learning algorithms&lt;/li&gt;
&lt;li&gt;&lt;em&gt;transform time series into attributional form&lt;/em&gt;:
&lt;ul&gt;
&lt;li&gt;temporal attributes – periodic attributes based on assumptions about what makes sense (e.g. daily/weekly sin/cos functions)&lt;/li&gt;
&lt;li&gt;historical attributes – e.g. measurements from some hours before&lt;/li&gt;
&lt;li&gt;real values – for manual checking&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
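&lt;p&gt;A minimal sketch of the transformation into attributional form described above. The helper name and the hour-indexed &lt;code&gt;history&lt;/code&gt; mapping are illustrative assumptions:&lt;/p&gt;

```python
from math import sin, cos, pi

def temporal_features(ts_hours, history, lags=(1, 24)):
    """Turn a time series into attributional form: periodic daily/weekly
    sin/cos attributes plus historical lag attributes.  `history` maps
    hour -> measurement (hypothetical input format)."""
    rows = []
    for t in ts_hours:
        row = {
            "day_sin": sin(2 * pi * (t % 24) / 24),     # daily period
            "day_cos": cos(2 * pi * (t % 24) / 24),
            "week_sin": sin(2 * pi * (t % 168) / 168),  # weekly period
            "week_cos": cos(2 * pi * (t % 168) / 168),
        }
        for lag in lags:                                # historical attributes
            row[f"lag_{lag}h"] = history.get(t - lag)
        rows.append(row)
    return rows

history = {t: float(t % 24) for t in range(200)}
rows = temporal_features(range(24, 30), history)
```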
&lt;p&gt;Incremental models do not require all historical data to be available and can be built incrementally. Incremental learning slides a window through the data and updates the model with new examples as they arrive.&lt;/p&gt;
&lt;h3 id="algorithm-very-fast-decision-tree-algorithms-vfdt"&gt;Algorithm: Very fast decision tree algorithms (VFDT)&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;only discrete attributes&lt;/li&gt;
&lt;li&gt;tree is not binary&lt;/li&gt;
&lt;li&gt;split a leaf node only if there is sufficient statistical evidence&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Growing the tree:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;update statistics in the leaf&lt;/li&gt;
&lt;li&gt;evaluate &lt;span class="math"&gt;\(H()\)&lt;/span&gt; on all attributes, compute difference between best two attributes: &lt;span class="math"&gt;\[
\Delta H = H(a_1) - H(a_2)
\]&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;compare Hoeffding bound (Hoeffding, 1963): &lt;span class="math"&gt;\[
\epsilon = \sqrt{\frac{R^2 \ln(2 / \delta)}{2n}}
\]&lt;/span&gt;&lt;/li&gt;
&lt;/ol&gt;
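&lt;p&gt;A quick numeric illustration of the Hoeffding bound above (a leaf is split once &lt;span class="math"&gt;\(\Delta H &gt; \epsilon\)&lt;/span&gt;); the values of &lt;code&gt;R&lt;/code&gt; and &lt;code&gt;delta&lt;/code&gt; are assumptions for the example:&lt;/p&gt;

```python
from math import log, sqrt

def hoeffding_bound(R, delta, n):
    """Epsilon from the formula above: with probability 1 - delta the
    true mean of a random variable with range R lies within epsilon of
    the mean observed over n examples."""
    return sqrt(R * R * log(2.0 / delta) / (2.0 * n))

# for information gain on a two-class problem the range is R = 1
eps_small_n = hoeffding_bound(R=1.0, delta=1e-6, n=100)
eps_large_n = hoeffding_bound(R=1.0, delta=1e-6, n=100_000)
# more examples -> smaller epsilon -> VFDT eventually trusts the best split
```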
&lt;h2 id="concept-drift-detection"&gt;Concept drift detection&lt;/h2&gt;
&lt;p&gt;The distribution of examples or the concept being modeled can change over time – e.g. as a consequence of a hidden variable or process. We must be careful to differentiate true drift from transient noise.&lt;/p&gt;
&lt;p&gt;Categorizations based on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;data management&lt;/em&gt;:
&lt;ul&gt;
&lt;li&gt;full memory (by increasing recent weights)&lt;/li&gt;
&lt;li&gt;partial memory (fixed or adaptive window)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;detection methods&lt;/em&gt;:
&lt;ul&gt;
&lt;li&gt;monitor performance indicators&lt;/li&gt;
&lt;li&gt;compare time-windows&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;adaptation capability&lt;/em&gt;:
&lt;ul&gt;
&lt;li&gt;blind methods (restart at regular intervals)&lt;/li&gt;
&lt;li&gt;informed methods&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Tests:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Page-Hinkley test (PH test): compute difference against average, alarm if larger than allowed&lt;/li&gt;
&lt;li&gt;Statistical process control: compute probability of error &lt;span class="math"&gt;\(p_i\)&lt;/span&gt; and its standard deviation &lt;span class="math"&gt;\(s_i\)&lt;/span&gt;, system switches between states according to bounds (“in control”, “warning”, “out of control”)&lt;/li&gt;
&lt;/ul&gt;
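&lt;p&gt;A minimal sketch of the Page-Hinkley test described above; the parameter values are illustrative (&lt;code&gt;delta&lt;/code&gt; is the tolerated deviation, &lt;code&gt;lam&lt;/code&gt; the alarm threshold):&lt;/p&gt;

```python
def page_hinkley(stream, delta=0.005, lam=1.0):
    """Page-Hinkley test sketch: accumulate deviations of each value
    from the running average (minus a tolerance delta) and raise an
    alarm when the accumulated drift exceeds its minimum by more than
    lam (i.e. the difference is larger than allowed)."""
    mean, m, m_min = 0.0, 0.0, 0.0
    for i, x in enumerate(stream, start=1):
        mean += (x - mean) / i              # running average
        m += x - mean - delta               # accumulated difference
        m_min = min(m_min, m)
        if m - m_min > lam:                 # larger than allowed -> alarm
            return i                        # index where drift was detected
    return None

# mean jumps from 0.2 to 0.8 halfway through: drift is flagged after the jump
stream = [0.2] * 50 + [0.8] * 50
alarm_at = page_hinkley(stream)
```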
&lt;h2 id="clustering-from-data-streams"&gt;Clustering from data streams&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-05-28&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;online clustering&lt;/li&gt;
&lt;li&gt;evaluating incremental algorithm&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In &lt;strong&gt;online clustering&lt;/strong&gt; the task is to maintain a continuously consistent good clustering using a small amount of memory and time (grouping objects into groups, unsupervised learning).&lt;/p&gt;
&lt;p&gt;Approaches:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;hierarchical clustering&lt;/em&gt;: build micro clusters for cluster features (CF), maintain statistics for each one (not storing examples), later combine them into macro clusters&lt;/li&gt;
&lt;li&gt;&lt;em&gt;partitioning clustering&lt;/em&gt; (k-means)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;density-based clustering&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;grid-based clustering&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;model-based clustering&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="algorithm-birch"&gt;Algorithm: BIRCH&lt;/h3&gt;
&lt;p&gt;Hierarchical clustering approach with acronym meaning Balanced Iterative Reducing and Clustering using Hierarchies (Zhang et al., 1996).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;build a tree&lt;/li&gt;
&lt;li&gt;every node has 6 cluster features and descendant subtrees&lt;/li&gt;
&lt;li&gt;each node summarizes the statistics of descending nodes&lt;/li&gt;
&lt;li&gt;parameters: &lt;span class="math"&gt;\(B\)&lt;/span&gt; - branch factor, &lt;span class="math"&gt;\(T\)&lt;/span&gt; - maximum absorption distance&lt;/li&gt;
&lt;li&gt;when a new example arrives, the closest CF can absorb it if it is closer than the radius &lt;span class="math"&gt;\(T\)&lt;/span&gt;; if not, a new CF entry is added; if there is no room, the parent node is split&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;An extension is CluStream (Aggarwal et al., 2003), which maintains just &lt;span class="math"&gt;\(q\)&lt;/span&gt; micro clusters and discards the oldest ones.&lt;/p&gt;
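&lt;p&gt;The cluster feature (CF) statistics maintained by such micro clusters can be sketched as a triple of count, linear sum, and square sum, which is enough to derive the centroid and radius without storing the examples themselves (a one-dimensional illustration, not the BIRCH tree itself):&lt;/p&gt;

```python
from math import sqrt

class ClusterFeature:
    """Cluster feature (CF) triple used by BIRCH-style micro clusters:
    the count N, linear sum LS, and square sum SS of absorbed points.
    CFs are additive, so statistics can be maintained incrementally."""
    def __init__(self):
        self.n, self.ls, self.ss = 0, 0.0, 0.0

    def absorb(self, x):
        self.n += 1
        self.ls += x
        self.ss += x * x

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # average squared distance of points from the centroid, rooted
        return sqrt(max(self.ss / self.n - self.centroid() ** 2, 0.0))

cf = ClusterFeature()
for x in [1.0, 2.0, 3.0]:
    cf.absorb(x)
```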
&lt;h2 id="evaluation-of-streaming-algorithms"&gt;Evaluation of streaming algorithms&lt;/h2&gt;
&lt;p&gt;Performance measures: MSE, CA…&lt;/p&gt;
&lt;p&gt;The approach in &lt;strong&gt;batch learning&lt;/strong&gt; is to split the data into a training set (further split into training and validation sets) and a test set.&lt;/p&gt;
&lt;p&gt;Issues in streaming data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dataset not fixed&lt;/li&gt;
&lt;li&gt;concept drifts&lt;/li&gt;
&lt;li&gt;decision models change over time&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Approaches influenced by order of examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;holdout and independent test set&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;predictive sequential (prequential) evaluation&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The solution is to use a &lt;em&gt;prequential estimate with a forgetting mechanism&lt;/em&gt; (using either a time window or fading factors).&lt;/p&gt;
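&lt;p&gt;The fading-factor variant can be sketched as follows (&lt;code&gt;alpha&lt;/code&gt; close to 1 means slower forgetting; the value used here is an illustrative assumption):&lt;/p&gt;

```python
def prequential_error(losses, alpha=0.95):
    """Prequential estimate with a fading-factor forgetting mechanism:
    every step discounts both the accumulated loss and the example
    count by alpha, so the estimate tracks recent performance."""
    s, n = 0.0, 0.0
    estimates = []
    for loss in losses:
        s = loss + alpha * s        # faded accumulated loss
        n = 1.0 + alpha * n         # faded example count
        estimates.append(s / n)
    return estimates

# the model suddenly improves halfway: the faded estimate follows quickly
est = prequential_error([1.0] * 100 + [0.0] * 100)
```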
&lt;p&gt;The Q-estimate is used to compare two algorithms. The sign of the logarithm of the ratio of their accumulated losses indicates which algorithm performs better.&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
Q_i(A,B) = \log\left(\frac{S^A_i}{S^B_i}\right)
\]&lt;/span&gt;&lt;/p&gt;
</summary><category term="student"></category></entry><entry><title>Sampling promotes community structure in social and information networks</title><link href="http://gw.tnode.com/network-analysis/physicaa2015-sampling-promotes-community-structure-in-social-and-information-networks/" rel="alternate"></link><updated>2015-07-01T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-05-13:network-analysis/physicaa2015-sampling-promotes-community-structure-in-social-and-information-networks/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Journal Physica A logo" height="120" src="http://gw.tnode.com/network-analysis/img/physicaa2015-logo.jpg" width="395"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h2 id="scientific-paper"&gt;Scientific paper&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;N. Blagus, L. Šubelj, G. Weiss, and M. Bajec, “&lt;strong&gt;Sampling promotes community structure in social and information networks&lt;/strong&gt;,” Phys. A Stat. Mech. its Appl., p. 15, 2015.&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; &lt;a href="http://www.sciencedirect.com/science/journal/03784371"&gt;journal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-book"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/network-analysis/f/physicaa2015blagus-paper.pdf"&gt;paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-bookmark-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/network-analysis/f/physicaa2015blagus.bib"&gt;bibtex&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;Any network studied in the literature is inevitably just a sampled representative of its real-world analogue. Additionally, network sampling is lately often applied to large networks to allow for their faster and more efficient analysis. Nevertheless, the changes in network structure introduced by sampling are still far from understood. In this paper, we study the presence of characteristic groups of nodes in sampled social and information networks. We consider different network sampling techniques including random node and link selection, network exploration and expansion. We first observe that the structure of social networks reveals densely linked groups like communities, while the structure of information networks is better described by modules of structurally equivalent nodes. However, despite these notable differences, the structure of sampled networks exhibits stronger characterization by community-like groups than the original networks, irrespective of their type and consistently across various sampling techniques. Hence, rich community structure commonly observed in social and information networks is to some extent merely an artifact of sampling.&lt;/p&gt;
</summary><category term="network analysis"></category><category term="journal"></category><category term="paper"></category></entry><entry><title>[AA-part3] CUDA architecture</title><link href="http://gw.tnode.com/student/aa-part3/" rel="alternate"></link><updated>2014-05-26T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-05-12:student/aa-part3/</id><summary type="html">
&lt;p&gt;&lt;strong&gt;Course&lt;/strong&gt;: &lt;a href="https://ucilnica.fri.uni-lj.si/course/view.php?id=89"&gt;https://ucilnica.fri.uni-lj.si/course/view.php?id=89&lt;/a&gt;&lt;br/&gt;&lt;strong&gt;Lecturer&lt;/strong&gt;: doc. dr. Tomaž Dobravec&lt;br/&gt;&lt;strong&gt;Language&lt;/strong&gt;: Slovenian&lt;br/&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-05-12&lt;/p&gt;
&lt;p&gt;Course overview (part 3):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;hardware and programming aspects of CUDA architecture&lt;/li&gt;
&lt;li&gt;solving standard problems and a comparison between serial (CPU) and parallel (GPU) implementation&lt;/li&gt;
&lt;li&gt;basics of the OpenCL programming environment&lt;/li&gt;
&lt;li&gt;CUDA architecture in the context of the OpenCL environment&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We will learn CUDA. The seminar assignment will be to write a program in Java and CUDA, or alternatively in Python and OpenCL. The deadline and defense of the seminar assignments will be on 9 June, after the pedagogical workshop.&lt;/p&gt;
&lt;p&gt;Project idea: given Java skeleton for an image processing application, implement Seam Carving using CUDA. Otherwise propose your project.&lt;/p&gt;
&lt;h2 id="arhitektura-cuda"&gt;Arhitektura CUDA&lt;/h2&gt;
&lt;p&gt;Miscellaneous:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;host – the CPU&lt;/li&gt;
&lt;li&gt;device – the GPU&lt;/li&gt;
&lt;li&gt;SM – streaming multiprocessor; has its own cache and executes the same code&lt;/li&gt;
&lt;li&gt;even in groups they execute the same code&lt;/li&gt;
&lt;li&gt;SP – an individual processor&lt;/li&gt;
&lt;li&gt;kernel – the basic component of code&lt;/li&gt;
&lt;li&gt;thread – threads execute kernels; it must be specified how they are to be executed&lt;/li&gt;
&lt;li&gt;one block of threads always executes on the same SM&lt;/li&gt;
&lt;li&gt;threads are arranged into blocks of at most 512 threads; threads within a block can communicate with each other, but cannot cooperate with other blocks (since blocks may execute concurrently)&lt;/li&gt;
&lt;li&gt;the programmer prepares a grid of blocks which is then executed; at the end it is guaranteed that all blocks have been executed&lt;/li&gt;
&lt;li&gt;memory is not cleared between different programs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-05-19&lt;/p&gt;
&lt;p&gt;Recommended tool is Nsight (CUDA Eclipse-like IDE):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;always check for returned errors&lt;/li&gt;
&lt;li&gt;files &lt;code&gt;*.cu&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Memory model:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;registers (32-bit) are allocated per block (8192) and then distributed among threads; at most 16 per kernel at full load; if the compiler cannot fit them, it spills to local memory&lt;/li&gt;
&lt;li&gt;local memory is used within a kernel; a slow part of DRAM, 8 kB in total&lt;/li&gt;
&lt;li&gt;shared memory is used by all threads of the same kernel block; if a value is needed more than once, it pays off to copy it here – this is the key to a fast program; fast&lt;/li&gt;
&lt;li&gt;global (device) memory can be used for communication between kernels; slow DRAM, a few GB in total&lt;/li&gt;
&lt;li&gt;host (CPU) memory&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Kernel grid:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;until one kernel finishes executing, the device does not move on to the next&lt;/li&gt;
&lt;li&gt;lightweight context switching between thread blocks, so several blocks can be loaded at once&lt;/li&gt;
&lt;li&gt;the key is to have many well-loaded blocks&lt;/li&gt;
&lt;li&gt;block size &amp;gt;=32, since otherwise execution in warps is less efficient&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="java-native-interface"&gt;Java native interface&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-05-26&lt;/p&gt;
&lt;p&gt;Java cannot access special system calls such as CUDA, but JNI makes it possible to call a C library, which then makes the calls onward.&lt;/p&gt;
&lt;pre class="sourceCode java"&gt;&lt;code class="sourceCode java"&gt;&lt;span class="co"&gt;// -Djava.library.path=...&lt;/span&gt;
&lt;span class="dt"&gt;static&lt;/span&gt; {
    System.&lt;span class="fu"&gt;loadLibrary&lt;/span&gt;(&lt;span class="st"&gt;"JNIFirst"&lt;/span&gt;);
}
&lt;span class="kw"&gt;private&lt;/span&gt; &lt;span class="kw"&gt;native&lt;/span&gt; &lt;span class="dt"&gt;static&lt;/span&gt; &lt;span class="dt"&gt;int&lt;/span&gt; &lt;span class="fu"&gt;sestej&lt;/span&gt;(&lt;span class="dt"&gt;int&lt;/span&gt; a, &lt;span class="dt"&gt;int&lt;/span&gt; b);&lt;/code&gt;&lt;/pre&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;&lt;span class="kw"&gt;cd&lt;/span&gt; src
&lt;span class="kw"&gt;javah&lt;/span&gt; -jni JNI
&lt;span class="co"&gt;# (generates JNI.h from JNI.java)&lt;/span&gt;
&lt;span class="co"&gt;# (prepare JNI.c)&lt;/span&gt;
&lt;span class="kw"&gt;gcc&lt;/span&gt; -I&lt;span class="ot"&gt;$JNI_INCLUDE&lt;/span&gt; -c JNI.c -o JNI.o
&lt;span class="kw"&gt;gcc&lt;/span&gt; -dynamiclib -o libJNIFirst.jnilib JNI.o&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="concept-shift"&gt;Concept: Shift&lt;/h3&gt;
&lt;p&gt;How to shift all elements of a vector by one element to the left using CUDA.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Avoid simultaneous read and write and synchronize execution using &lt;code&gt;__syncthreads()&lt;/code&gt; (must be outside a conditional clause).&lt;/p&gt;
&lt;pre class="sourceCode c"&gt;&lt;code class="sourceCode c"&gt;__global__ &lt;span class="dt"&gt;void&lt;/span&gt; shiftLeft(&lt;span class="dt"&gt;int&lt;/span&gt; *a) {
    __shared__ &lt;span class="dt"&gt;int&lt;/span&gt; mem[N];  &lt;span class="co"&gt;// shared by all threads of a block&lt;/span&gt;
    &lt;span class="dt"&gt;int&lt;/span&gt; idx = threadIdx.x;
    mem[idx] = a[idx];
    __syncthreads();  &lt;span class="co"&gt;// must be outside conditional clauses&lt;/span&gt;
    &lt;span class="kw"&gt;if&lt;/span&gt;(idx &amp;lt; N - &lt;span class="dv"&gt;1&lt;/span&gt;)
        a[idx] = mem[idx + &lt;span class="dv"&gt;1&lt;/span&gt;];
}&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="concept-reduction"&gt;Concept: Reduction&lt;/h3&gt;
&lt;p&gt;How to sum all elements of a vector using CUDA. This parallel operation is called a reduction.&lt;/p&gt;
&lt;pre class="sourceCode c"&gt;&lt;code class="sourceCode c"&gt;__global__ &lt;span class="dt"&gt;void&lt;/span&gt; reduce(&lt;span class="dt"&gt;int&lt;/span&gt; *a) {
    &lt;span class="dt"&gt;int&lt;/span&gt; i = threadIdx.x;
    &lt;span class="kw"&gt;for&lt;/span&gt;(&lt;span class="dt"&gt;int&lt;/span&gt; stride=&lt;span class="dv"&gt;1&lt;/span&gt;; stride &amp;lt; N; stride *= &lt;span class="dv"&gt;2&lt;/span&gt;) {
        &lt;span class="kw"&gt;if&lt;/span&gt;(i % stride == &lt;span class="dv"&gt;0&lt;/span&gt;)
            a[&lt;span class="dv"&gt;2&lt;/span&gt;*i] += a[&lt;span class="dv"&gt;2&lt;/span&gt; * i + stride];
        __syncthreads();
    }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Using shared memory instead of global would speed it up.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: In each iteration we create holes in the vector, and the idle threads still consume resources in their warps. By compacting the active elements and leaving the holes out we can improve this.&lt;/p&gt;
&lt;pre class="sourceCode c"&gt;&lt;code class="sourceCode c"&gt;__global__ &lt;span class="dt"&gt;void&lt;/span&gt; reduce(&lt;span class="dt"&gt;int&lt;/span&gt; *a) {
    &lt;span class="dt"&gt;int&lt;/span&gt; i = threadIdx.x;
    &lt;span class="kw"&gt;extern&lt;/span&gt; __shared__ &lt;span class="dt"&gt;int&lt;/span&gt; mem[];
    mem[i] = a[&lt;span class="dv"&gt;2&lt;/span&gt; * i] + a[&lt;span class="dv"&gt;2&lt;/span&gt; * i + &lt;span class="dv"&gt;1&lt;/span&gt;];
    &lt;span class="kw"&gt;for&lt;/span&gt;(&lt;span class="dt"&gt;int&lt;/span&gt; stride=N/&lt;span class="dv"&gt;4&lt;/span&gt;; stride &amp;gt;= &lt;span class="dv"&gt;1&lt;/span&gt;; stride /= &lt;span class="dv"&gt;2&lt;/span&gt;) {
        __syncthreads();
        &lt;span class="kw"&gt;if&lt;/span&gt;(i &amp;lt; stride)
            mem[i] += mem[i + stride];
    }
    __syncthreads();
    a[&lt;span class="dv"&gt;0&lt;/span&gt;] = mem[&lt;span class="dv"&gt;0&lt;/span&gt;];
}&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="concept-histogram"&gt;Concept: Histogram&lt;/h3&gt;
&lt;p&gt;How to count the number of occurrences of characters in a string (i.e. compute a histogram).&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: The problem is the simultaneous read and write of the same counter. The solution is a special CUDA atomic operation.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Neighbouring threads should access neighbouring memory locations, because DRAM works faster when accessed in blocks.&lt;/p&gt;
&lt;pre class="sourceCode c"&gt;&lt;code class="sourceCode c"&gt;__global__ &lt;span class="dt"&gt;void&lt;/span&gt; histogram(&lt;span class="dt"&gt;unsigned&lt;/span&gt; &lt;span class="dt"&gt;char&lt;/span&gt; *str, &lt;span class="dt"&gt;int&lt;/span&gt; len, &lt;span class="dt"&gt;unsigned&lt;/span&gt; &lt;span class="dt"&gt;int&lt;/span&gt; *hist) {
    &lt;span class="dt"&gt;int&lt;/span&gt; i = blockIdx.x * blockDim.x + threadIdx.x;
    &lt;span class="dt"&gt;int&lt;/span&gt; stride = blockDim.x * gridDim.x;
    &lt;span class="kw"&gt;while&lt;/span&gt;(i &amp;lt; len) {
        &lt;span class="co"&gt;//hist[niz[i] - 'a']++;  // wrong&lt;/span&gt;
        atomicAdd(&amp;amp;hist[niz[i] - 'a'], &lt;span class="dv"&gt;1&lt;/span&gt;);
        i += stride;
    }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Project submission (source code, short report) and presentation on 16 June 2014 at 14:00.&lt;/p&gt;
</summary><category term="student"></category></entry><entry><title>[UI-part2] Inductive logic programming</title><link href="http://gw.tnode.com/student/ui-part2/" rel="alternate"></link><updated>2014-05-07T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-04-02:student/ui-part2/</id><summary type="html">
&lt;p&gt;&lt;strong&gt;Course&lt;/strong&gt;: &lt;a href="https://ucilnica.fri.uni-lj.si/course/view.php?id=81"&gt;https://ucilnica.fri.uni-lj.si/course/view.php?id=81&lt;/a&gt;&lt;br/&gt;&lt;strong&gt;Lecturer&lt;/strong&gt;: akad. prof. dr. Ivan Bratko&lt;br/&gt;&lt;strong&gt;Language&lt;/strong&gt;: English&lt;br/&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-04-02&lt;/p&gt;
&lt;p&gt;Course overview (part 2):&lt;/p&gt;
&lt;p&gt;Inductive Logic Programming (ILP) is an approach to machine learning in which the learned descriptions are represented in logic, typically predicate logic. This is the most expressive hypothesis language among all approaches to machine learning. ILP also makes use of “background knowledge”, that is, knowledge available to the learning program before learning begins. Some basic methods of ILP will be discussed.&lt;/p&gt;
&lt;h2 id="ilp-programming"&gt;ILP Programming&lt;/h2&gt;
&lt;p&gt;See presentation files for all lectures.&lt;/p&gt;
</summary><category term="student"></category></entry><entry><title>[AA-part2] Parallel algorithms</title><link href="http://gw.tnode.com/student/aa-part2/" rel="alternate"></link><updated>2014-05-05T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-03-24:student/aa-part2/</id><summary type="html">
&lt;p&gt;&lt;strong&gt;Course&lt;/strong&gt;: &lt;a href="https://ucilnica.fri.uni-lj.si/course/view.php?id=89"&gt;https://ucilnica.fri.uni-lj.si/course/view.php?id=89&lt;/a&gt;&lt;br/&gt;&lt;strong&gt;Lecturer&lt;/strong&gt;: prof. dr. Borut Robič&lt;br/&gt;&lt;strong&gt;Language&lt;/strong&gt;: Slovenian&lt;br/&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-03-24&lt;/p&gt;
&lt;p&gt;Course overview (part 2):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;models of parallel computation: PRAM and versions CRCW, CREW, EREW&lt;/li&gt;
&lt;li&gt;design and performance of parallel algorithms: simulation of models, Brent theorem&lt;/li&gt;
&lt;li&gt;performance guarantees of CRCW, CREW, EREW simulation: O(log n)-time sorting&lt;/li&gt;
&lt;li&gt;class NC: efficiently parallel solvable problems, robustness of NC, is P = NC?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Lectures only, with an oral exam (covering the lectures and the seminar work); there will be no written exam or homework assignments. We will look at the basic concepts from the field of models of computation and algorithms. The oral exam can be taken whenever you wish (possibly the same day as Kodek); we go through the material and what it all means intuitively.&lt;/p&gt;
&lt;h3 id="homework-skupinska-seminarska-naloga"&gt;Homework: Skupinska seminarska naloga&lt;/h3&gt;
&lt;p&gt;Poglej za vsakega od modelov računanja (tudi bolj natančne) kaj je bistveno. Do konca naših predavanj.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;advantages&lt;/li&gt;
&lt;li&gt;disadvantages&lt;/li&gt;
&lt;li&gt;reasoning about a concrete problem and which model is suitable for it&lt;/li&gt;
&lt;li&gt;what it costs to move from one model to another&lt;/li&gt;
&lt;li&gt;the whole world of problem spaces (as with sequential models, e.g. nondeterminism, approximation algorithms)&lt;/li&gt;
&lt;li&gt;which topologies are suitable for which problem; how to map the activities in the model onto an actual architecture; mapping the algorithm onto the machine so that the critical communication does not travel along the longest links&lt;/li&gt;
&lt;li&gt;when registering the topic, someone should already have engaged with this in depth&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="uvod-v-modele-računanja"&gt;Uvod v modele računanja&lt;/h2&gt;
&lt;p&gt;Model računanja je abstrakcija, običajno formalno zapisana, ki zavzema vse kaj je bistvenega za algoritem, porabljen čas ali drug vir.&lt;/p&gt;
&lt;p&gt;Modeli:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;PRAM&lt;/li&gt;
&lt;li&gt;Turing machine&lt;/li&gt;
&lt;li&gt;Lambda calculus&lt;/li&gt;
&lt;li&gt;RAM (describes the Von Neumann architecture)&lt;/li&gt;
&lt;li&gt;Recursive functions&lt;/li&gt;
&lt;li&gt;Post machine&lt;/li&gt;
&lt;li&gt;Register machine&lt;/li&gt;
&lt;li&gt;DNA computing&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="turingov-stroj"&gt;Turingov stroj&lt;/h3&gt;
&lt;p&gt;Turingov stroj uporabimo, da ugotovimo, kateri računski problemi so rešljivi in kateri ne. Za nerešljive probleme ne obstaja noben algoritem.&lt;/p&gt;
&lt;p&gt;Kateri računski problemi so rešljivi? S tem se ukvarja &lt;em&gt;teorija izračunljivosti&lt;/em&gt; (computability theory).&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Do Diophantine equations, or systems of such equations, have solutions in the integers? It turns out (1972) that no algorithm exists that could decide, for an arbitrary Diophantine equation, whether it is solvable. Computability theory thus helps us avoid tackling unsolvable problems.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;How much time/space is needed to solve a solvable computational problem? This is the subject of &lt;em&gt;computational complexity theory&lt;/em&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;For the example of sorting numbers, a suitable analysis of algorithms reveals that the problem cannot be solved faster than in &lt;span class="math"&gt;\(O(n \log n)\)&lt;/span&gt; time.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The Turing machine is a model of &lt;em&gt;sequential&lt;/em&gt; computation.&lt;/p&gt;
&lt;h2 id="model-pram"&gt;Model PRAM&lt;/h2&gt;
&lt;p&gt;Kaj pa vzporedno (paralelno) računanje? Pri takem računanju sodeluje &lt;em&gt;več procesorjev&lt;/em&gt;, ki so sposobnih hkrati reševati en problem in pri tem sodelujejo.&lt;/p&gt;
&lt;p&gt;Glavna težava pri modeliranju takega računanja je bila kako čim bolj verno modelirati &lt;em&gt;čas&lt;/em&gt;, ki se porablja &lt;em&gt;za komunikacijo med procesorji&lt;/em&gt;. Posledično je nastalo &lt;em&gt;več predlogov za model&lt;/em&gt; vzporednega računanja.&lt;/p&gt;
&lt;p&gt;Eden prvih modelov je bil &lt;strong&gt;PRAM&lt;/strong&gt; (parallel random access machine):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the time needed for communication is neglected&lt;/li&gt;
&lt;li&gt;the results will therefore be optimistic, i.e. they represent a &lt;em&gt;lower bound&lt;/em&gt; on the time complexity of parallel computation&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;[  shared memory  ]    -- potentially infinite
 ^     ^         ^
 |     |         |
 v     v         v
[P1]  [P2]  ... [Pn]   -- finite number of processors&lt;/code&gt;&lt;/pre&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/model-pram.jpg"&gt;&lt;img alt="Model PRAM" height="452" src="http://gw.tnode.com/student/img/model-pram.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;The PRAM model&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;A PRAM (parallel random access machine) has a set of processors &lt;span class="math"&gt;\(P_1, P_2, ..., P_n\)&lt;/span&gt; connected to a shared memory composed of cells. When modelling, we ignore details we judge to be unimportant.&lt;/p&gt;
&lt;p&gt;Model of the &lt;strong&gt;shared memory&lt;/strong&gt; (properties/idealizations):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;potentially infinite (it can be enlarged as much as you like, but is finite at any given moment)&lt;/li&gt;
&lt;li&gt;composed of memory cells&lt;/li&gt;
&lt;li&gt;all cells are of equal size (e.g. all 32- or 64-bit), yet potentially unbounded&lt;/li&gt;
&lt;li&gt;every cell is accessible to every processor &lt;span class="math"&gt;\(P_i\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;the access time is the same for every cell and every processor &lt;span class="math"&gt;\(P_i\)&lt;/span&gt; (= one step/unit of time)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Model of the &lt;strong&gt;processors&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;potentially infinitely many (e.g. &lt;span class="math"&gt;\(n, n^2, \sqrt{n}\)&lt;/span&gt; processors)&lt;/li&gt;
&lt;li&gt;all processors &lt;span class="math"&gt;\(P_i\)&lt;/span&gt; execute the same algorithm/program (although occasionally some will do something else)&lt;/li&gt;
&lt;li&gt;all can access any memory cell in one unit of time&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="sočasno-dostopanje"&gt;Sočasno dostopanje&lt;/h3&gt;
&lt;p&gt;Kaj se zgodi, če procesorji sočasno želijo dostopati do iste celice? Odvisno od operacije, ki jo želijo izvesti na celici (branje ali vpis).&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;em&gt;sočasno branje&lt;/em&gt;: načeloma ni težav v modelu (vsebino celice dobijo vsi v naslednjem koraku)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;sočasen vpis&lt;/em&gt;: če vsaj dva želita vpisati različno vsebino, ni jasno čigava vsebina bo obveljala (problem!)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Glede na to ločimo več različic PRAMa:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;EREW PRAM&lt;/strong&gt; (exclusive-read, exclusive-write):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;in each step: &lt;em&gt;at most 1&lt;/em&gt; processor may read from a given cell&lt;/li&gt;
&lt;li&gt;in each step: &lt;em&gt;at most 1&lt;/em&gt; processor may write to a given cell&lt;/li&gt;
&lt;li&gt;the &lt;em&gt;developer&lt;/em&gt; of the algorithm/program must ensure this holds&lt;/li&gt;
&lt;li&gt;this model is the closest to reality (the least work to realize)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;CREW PRAM&lt;/strong&gt; (concurrent-read, exclusive-write):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;in each step: &lt;em&gt;more than 1&lt;/em&gt; processor may read from the same cell&lt;/li&gt;
&lt;li&gt;in each step: &lt;em&gt;at most 1&lt;/em&gt; processor may write to a given cell&lt;/li&gt;
&lt;li&gt;the developer must ensure this&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;CRCW PRAM&lt;/strong&gt; (concurrent-read, concurrent-write):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;in each step: &lt;em&gt;more than 1&lt;/em&gt; processor may read from the same cell&lt;/li&gt;
&lt;li&gt;in each step: &lt;em&gt;more than 1&lt;/em&gt; processor may write to the same cell (what happens then must be agreed upon)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Which processor wins on a concurrent write in a CRCW PRAM? This is determined by the so-called &lt;em&gt;concurrent-write mode&lt;/em&gt; of the CRCW:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;consistent mode&lt;/em&gt;:
&lt;ul&gt;
&lt;li&gt;all processors must write the same value to the cell&lt;/li&gt;
&lt;li&gt;the algorithm developer must take care of this&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;arbitrary mode&lt;/em&gt;:
&lt;ul&gt;
&lt;li&gt;the winner is random (from our point of view)&lt;/li&gt;
&lt;li&gt;the developer will have to account for this (nondeterministic behaviour)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;priority mode&lt;/em&gt;:
&lt;ul&gt;
&lt;li&gt;the processor with the highest priority wins (e.g. the one with the lowest index)&lt;/li&gt;
&lt;li&gt;the developer will have to account for this&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;fusion mode&lt;/em&gt;:
&lt;ul&gt;
&lt;li&gt;an operation is applied to the written values and its result is stored in the cell&lt;/li&gt;
&lt;li&gt;the (binary) operation must be &lt;em&gt;commutative&lt;/em&gt; and &lt;em&gt;associative&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;e.g. disjunction, conjunction, max, min, multiplication, addition (subtraction is not commutative)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="vzporedni-algoritmi"&gt;Vzporedni algoritmi&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-03-31&lt;/p&gt;
&lt;p&gt;Tekom predavanj želimo pokazati, da je vseeno katerega od modelov izberemo, saj lahko le-ti drug drugega simulirajo. Cena za simulacijo je logaritemska. Pri vzporednem računanju nas zanimajo poli-logaritmični algoritmi &lt;span class="math"&gt;\(\log^p(n)\)&lt;/span&gt;, ne polinomski &lt;span class="math"&gt;\(n^k\)&lt;/span&gt;.&lt;/p&gt;
&lt;h3 id="tehnika-preusmerjanje-kazalcev-pointer-jumping"&gt;Tehnika: Preusmerjanje kazalcev (pointer jumping)&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Preusmerjanje kazalcev&lt;/em&gt; je tehnika/metoda za razvoj vzporednih algoritmov. Posplošitev metode “deli in vladaj” iz razvoja zaporednih algoritmov.&lt;/p&gt;
&lt;h4 id="primer-rangiranje-seznamov-list-ranking"&gt;Primer: Rangiranje seznamov (list ranking)&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Def.&lt;/strong&gt; (rangiranje seznama): Dan je seznam L z vozlišči (linked list &lt;code&gt;o-&amp;gt;o-&amp;gt;o-&amp;gt;...-&amp;gt;o&lt;/code&gt;, vsak vsebuje vrednost &lt;span class="math"&gt;\(d[i]\)&lt;/span&gt; in kazalec &lt;span class="math"&gt;\(next[i]\)&lt;/span&gt;). Ne vemo koliko je vozlišč in premikamo se lahko le na naslednjika. Za vsako vozlišče seznama L izračunaj &lt;em&gt;rang&lt;/em&gt; (=razdalja tega vozlišča do konca seznama). Razdalja zadnjega do konca je &lt;span class="math"&gt;\(0\)&lt;/span&gt;. Npr.: &lt;code&gt;L-&amp;gt;[3]-&amp;gt;[2]-&amp;gt;[1]-&amp;gt;[0]&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Sequential algorithm&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;idea: start at the beginning of the list, walk to the end, and count the ranks on the way back&lt;/li&gt;
&lt;li&gt;two passes from the beginning to the end of the list are needed&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(2n\)&lt;/span&gt; node visits =&amp;gt; asymptotic time complexity &lt;span class="math"&gt;\(\Theta(n)\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Is there a parallel algorithm for the PRAM with an (asymptotic) time complexity lower than &lt;span class="math"&gt;\(\Theta(n)\)&lt;/span&gt; (e.g. &lt;span class="math"&gt;\(\Theta(\sqrt{n})\)&lt;/span&gt;, &lt;span class="math"&gt;\(\Theta(\log^p(n))\)&lt;/span&gt;)? Yes. There is an EREW PRAM algorithm with &lt;span class="math"&gt;\(n\)&lt;/span&gt; processors and time complexity &lt;span class="math"&gt;\(O(\log(n))\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Parallel algorithm&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the list is finite and the system is large enough&lt;/li&gt;
&lt;li&gt;we assign each node its own PE (processing element)&lt;/li&gt;
&lt;li&gt;idea: in each step of the algorithm the list splits into 2 sublists, while accounting for the effect of this splitting on the values &lt;span class="math"&gt;\(d\)&lt;/span&gt; (the splitting is pointer jumping)&lt;/li&gt;
&lt;li&gt;first every node receives the value 1, and the last one 0&lt;/li&gt;
&lt;li&gt;node invariant:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
d[i] = \begin{cases}
  0 &amp;amp; \text{if } next[i] = nil \\
  1 + d[next[i]] &amp;amp; \text{else}
\end{cases}
\]&lt;/span&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;RankComputation(L):
  forall i inparallel do
    if next[i] = nil then d[i] &amp;lt;- 0 else d[i] &amp;lt;- 1
  while there exists a node i: next[i] != nil do
    forall i inparallel do
      if next[i] != nil then
        d[i] &amp;lt;- d[i] + d[next[i]]
        next[i] &amp;lt;- next[next[i]]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the statement &lt;code&gt;forall i inparallel do&lt;/code&gt; is executed in parallel, simultaneously, by all those processors whose nodes are referred to&lt;/li&gt;
&lt;li&gt;the compiler must ensure that the reads are performed before the writes
&lt;ul&gt;
&lt;li&gt;from: &lt;code&gt;forall i inparallel do A[i] &amp;lt;- B[i]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;to: &lt;code&gt;forall i inparallel do temp[i] &amp;lt;- B[i]; forall i inparallel do A[i] &amp;lt;- temp[i]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;a problem arises with: &lt;code&gt;forall i inparallel do A[i] &amp;lt;- A[i-1]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;the compiler takes care of this&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;the condition &lt;code&gt;while there exists a node i&lt;/code&gt; can be checked on:
&lt;ul&gt;
&lt;li&gt;a CRCW PRAM with fusion-mode writes in constant time &lt;span class="math"&gt;\(O(1)\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;an EREW PRAM, where time &lt;span class="math"&gt;\(O(\log(n))\)&lt;/span&gt; is needed&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="primer-računanje-predpon-prefix-computation"&gt;Primer: Računanje predpon (prefix computation)&lt;/h4&gt;
&lt;p&gt;Tudi tu se uporablja preusmerjanje kazalcev.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Def.&lt;/strong&gt; (računanje predpon): Dano je zaporedje &lt;span class="math"&gt;\(x_1,...x_n\)&lt;/span&gt; in binarna asociativna operacija &lt;span class="math"&gt;\(\otimes\)&lt;/span&gt;. Izračunati je potrebno vrednosti &lt;span class="math"&gt;\(y_1,...y_n\)&lt;/span&gt;, kjer je &lt;span class="math"&gt;\(y_1 = x_1\)&lt;/span&gt;, &lt;span class="math"&gt;\(y_{i} = y_{i-1} \otimes x_i\)&lt;/span&gt; za &lt;span class="math"&gt;\(i &amp;gt;= 2\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;Npr.: &lt;span class="math"&gt;\(y_1 = x_1\)&lt;/span&gt;, &lt;span class="math"&gt;\(y_2 = x_1 \otimes x_2\)&lt;/span&gt;, &lt;span class="math"&gt;\(y_3 = x_1 \otimes x_2 \otimes x_3, ...\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Zaporedni alg.&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;requires &lt;span class="math"&gt;\(O(n)\)&lt;/span&gt; time&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Can it be done faster in parallel? Yes. In time &lt;span class="math"&gt;\(O(\log(n))\)&lt;/span&gt;, even on an EREW PRAM with &lt;span class="math"&gt;\(n\)&lt;/span&gt; PEs.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Parallel algorithm&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;store the input data &lt;span class="math"&gt;\(x_1,...,x_n\)&lt;/span&gt; in a linked list L&lt;/li&gt;
&lt;li&gt;method: pointer jumping&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;PrefixComputation(L):
  forall i inparallel do
    y[i] &amp;lt;- x[i]
  while there exists a node i: next[i] != nil do
    forall i inparallel do
      if next[i] != nil then
        y[next[i]] &amp;lt;- y[i] * y[next[i]]
        next[i] &amp;lt;- next[next[i]]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the condition &lt;code&gt;while there exists a node i&lt;/code&gt; can be checked on:
&lt;ul&gt;
&lt;li&gt;a CRCW PRAM in &lt;span class="math"&gt;\(O(1)\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;an EREW PRAM (with &lt;span class="math"&gt;\(n\)&lt;/span&gt; PEs) in &lt;span class="math"&gt;\(O(\log(n))\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;An application of prefix computation&lt;/em&gt;: Given a binary tree with &lt;span class="math"&gt;\(n\)&lt;/span&gt; nodes (the root at depth &lt;span class="math"&gt;\(0\)&lt;/span&gt;), compute the depth of every node of the tree.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;With a sequential algorithm in &lt;span class="math"&gt;\(O(n)\)&lt;/span&gt; using one of the tree traversals (depth-first, breadth-first).&lt;/li&gt;
&lt;li&gt;With a parallel algorithm on an EREW PRAM (with &lt;span class="math"&gt;\(n\)&lt;/span&gt; PEs) the problem can be solved in &lt;span class="math"&gt;\(O(\log(n))\)&lt;/span&gt; time, even for degenerate trees. It reduces to the rank computation problem.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="ocenjevanje-zmogljivosti-vzporednih-algoritmov"&gt;Ocenjevanje zmogljivosti vzporednih algoritmov&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-04-07&lt;/p&gt;
&lt;p&gt;Zanimala nas bo cena, delo, pohitritev in učinkovitost vzporednih algoritmov.&lt;/p&gt;
&lt;p&gt;Naj bo dan problem &lt;span class="math"&gt;\(P\)&lt;/span&gt; in njegov primerek velikosti &lt;span class="math"&gt;\(n\)&lt;/span&gt; (npr. uredi &lt;span class="math"&gt;\(n\)&lt;/span&gt; števil).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Def.&lt;/strong&gt;: &lt;span class="math"&gt;\(T_{seq}(n)\)&lt;/span&gt; označuje časovno zahtevnost najboljšega zaporednega algoritma za reševanje problema &lt;span class="math"&gt;\(P\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;Naj bo dan vzporedni algoritem &lt;span class="math"&gt;\(A\)&lt;/span&gt; za reševanje problema &lt;span class="math"&gt;\(P\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Def.&lt;/strong&gt;: &lt;span class="math"&gt;\(T_{par}(p, n)\)&lt;/span&gt; označuje časovno zahtevnost izvajanja algoritma &lt;span class="math"&gt;\(A\)&lt;/span&gt; za reševanje problema &lt;span class="math"&gt;\(P\)&lt;/span&gt; na &lt;span class="math"&gt;\(p\)&lt;/span&gt; procesorjih v PRAM modelu.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Def.&lt;/strong&gt;: &lt;em&gt;Cena&lt;/em&gt; (ang. cost) vzporednega algoritma &lt;span class="math"&gt;\(A\)&lt;/span&gt; je &lt;span class="math"&gt;\(C_p(n) = p \cdot T_{par}(p, n)\)&lt;/span&gt;. Celoten porabljan čas, tako za računanje kot čakanje.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Def.&lt;/strong&gt;: &lt;em&gt;Delo&lt;/em&gt; (ang. work) vzporednega algoritma &lt;span class="math"&gt;\(A\)&lt;/span&gt; je &lt;span class="math"&gt;\(W_p(n)\)&lt;/span&gt;. To je &lt;em&gt;število vseh operacij&lt;/em&gt;, ki so jih dejansko opravile vse procesne enote pri izvajanju algoritma &lt;span class="math"&gt;\(A\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
W_p(n) \leq C_p(n)
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;The cost is minimal when all processors finish at the same time (and all of them work the whole time).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Def.&lt;/strong&gt;: The &lt;em&gt;speedup&lt;/em&gt; of a parallel algorithm &lt;span class="math"&gt;\(A\)&lt;/span&gt; is &lt;span class="math"&gt;\(S_p(n) = \frac{T_{seq}(n)}{T_{par}(p, n)}\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Def.&lt;/strong&gt;: The &lt;em&gt;efficiency&lt;/em&gt; of a parallel algorithm &lt;span class="math"&gt;\(A\)&lt;/span&gt; is &lt;span class="math"&gt;\(E_p(n) = \frac{S_p(n)}{p}\)&lt;/span&gt; (how much speedup each processor contributed on average). Substituting the formula for speedup gives: &lt;span class="math"&gt;\(E_p(n) = \frac{T_{seq}(n)}{C_p(n)}\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;It holds&lt;/strong&gt;: &lt;span class="math"&gt;\(S_p(n) \leq p\)&lt;/span&gt; (the algorithm runs at most &lt;span class="math"&gt;\(p\)&lt;/span&gt; times faster) and &lt;span class="math"&gt;\(E_p(n) \leq 1\)&lt;/span&gt; (the efficiency is at most &lt;span class="math"&gt;\(1\)&lt;/span&gt;).&lt;/p&gt;
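&lt;p&gt;As a worked example of these definitions (a sketch using the list-ranking algorithm from above, and assuming the stated bounds are tight):&lt;/p&gt;

```latex
% List ranking on an EREW PRAM with p = n PEs:
%   T_seq(n) = \Theta(n),  T_par(n, n) = \Theta(\log n)
C_n(n) = n \cdot \Theta(\log n) = \Theta(n \log n)                        % cost
S_n(n) = \frac{\Theta(n)}{\Theta(\log n)} = \Theta\left(\frac{n}{\log n}\right)  % speedup
E_n(n) = \frac{S_n(n)}{n} = \Theta\left(\frac{1}{\log n}\right)                  % efficiency
```

&lt;p&gt;The cost exceeds &lt;span class="math"&gt;\(T_{seq}(n) = \Theta(n)\)&lt;/span&gt; by a logarithmic factor, so the efficiency tends to 0 as &lt;span class="math"&gt;\(n\)&lt;/span&gt; grows.&lt;/p&gt;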
&lt;h3 id="izrek-o-enostavni-simulaciji"&gt;Izrek o enostavni simulaciji&lt;/h3&gt;
&lt;p&gt;Govori o izvajanju algoritma na manjšem vzporednem računalniku: Delovanje PRAMa lahko simuliramo z manjšim PRAMom (vzporedni algoritem lahko izvedemo tudi na manjšem vzporednem računalniku). Pri tem dosežemo zmanjšanje zmogljivosti algoritma, a poslabšanje je navzgor omejeno.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Izrek&lt;/strong&gt;: Naj bo &lt;span class="math"&gt;\(A\)&lt;/span&gt; vzporedni algoritem, ki se izvaja na PRAMu s &lt;span class="math"&gt;\(p\)&lt;/span&gt; procesorji &lt;span class="math"&gt;\(t\)&lt;/span&gt; časa. Manjši PRAM enakega tipa, ki ima &lt;span class="math"&gt;\(p' &amp;lt; p\)&lt;/span&gt; procesorjev, lahko izvede algoritem &lt;span class="math"&gt;\(A\)&lt;/span&gt; v času &lt;span class="math"&gt;\(O(\frac{t \cdot p}{p'})\)&lt;/span&gt;. Cena algoritma &lt;span class="math"&gt;\(A\)&lt;/span&gt; na manjšem PRAMu pa je &lt;strong&gt;kvečjemu dvakrat večja&lt;/strong&gt; od cene na večjem PRAMu:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
C_{p'} \leq 2C_p
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Proof&lt;/em&gt;: The smaller PRAM can execute each step of algorithm &lt;span class="math"&gt;\(A\)&lt;/span&gt; in at most &lt;span class="math"&gt;\(\lceil\frac{p}{p'}\rceil\)&lt;/span&gt; time.&lt;/p&gt;
&lt;p&gt;The smaller PRAM executes the whole algorithm &lt;span class="math"&gt;\(A\)&lt;/span&gt; in time &lt;span class="math"&gt;\(t' = t \lceil\frac{p}{p'}\rceil = O(\frac{t \cdot p}{p'})\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;Cost: &lt;span class="math"&gt;\(C_{p'} = p' \cdot t' = p' \cdot t \lceil\frac{p}{p'}\rceil \leq p' \cdot t \cdot (\frac{p}{p'} + 1) = tp + tp' \leq tp + tp = 2tp = 2C_p\)&lt;/span&gt; (fewer processors, more time).&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Question&lt;/em&gt;: What if &lt;span class="math"&gt;\(p' = 1\)&lt;/span&gt;? That is still a PRAM, but with a single processor, which is already the RAM model of &lt;em&gt;sequential computation&lt;/em&gt;. From the theorem it follows (since &lt;span class="math"&gt;\(p' = 1\)&lt;/span&gt;):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Algorithm &lt;span class="math"&gt;\(A\)&lt;/span&gt; can be simulated on the sequential model in time &lt;span class="math"&gt;\(O(tp)\)&lt;/span&gt;.&lt;/li&gt;
&lt;li&gt;By the theorem, &lt;span class="math"&gt;\(C_1 \leq 2C_p\)&lt;/span&gt;, and on the other hand &lt;span class="math"&gt;\(C_1 = 1 \cdot T_{par}(1, n) \geq T_{seq}(n)\)&lt;/span&gt;, hence: &lt;span class="math"&gt;\(\frac{1}{2}T_{seq}(n) \leq C_p(n)\)&lt;/span&gt;.&lt;/li&gt;
&lt;li&gt;From this it follows that &lt;span class="math"&gt;\(C_p(n) = \Omega(T_{seq}(n))\)&lt;/span&gt; (the cost of a parallel algorithm is &lt;strong&gt;bounded from below&lt;/strong&gt; by the time complexity of the best sequential algorithm).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Suppose that &lt;span class="math"&gt;\(C_p(n) = \Theta(T_{seq}(n))\)&lt;/span&gt;. Since &lt;span class="math"&gt;\(C_p(n)\)&lt;/span&gt; by definition equals &lt;span class="math"&gt;\(p \cdot T_{par}(p,n)\)&lt;/span&gt;, it follows that &lt;span class="math"&gt;\(p \cdot T_{par}(p,n) = \Theta(T_{seq}(n))\)&lt;/span&gt;, and therefore &lt;span class="math"&gt;\(T_{par}(p,n) = \Theta(\frac{T_{seq}(n)}{p})\)&lt;/span&gt;. Algorithm &lt;span class="math"&gt;\(A\)&lt;/span&gt; on &lt;span class="math"&gt;\(p\)&lt;/span&gt; processors would then be &lt;span class="math"&gt;\(p\)&lt;/span&gt; times faster than the fastest sequential algorithm. Evidently &lt;span class="math"&gt;\(A\)&lt;/span&gt; utilizes all &lt;span class="math"&gt;\(p\)&lt;/span&gt; processors for the whole duration of solving the problem and solves it in the least possible time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Corollary&lt;/strong&gt; (and definition): A parallel algorithm &lt;strong&gt;is efficient&lt;/strong&gt; if it satisfies:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
C_p(n) = \Theta(T_{seq}(n))
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;How many processors do we need?&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
p = \Theta(\frac{T_{seq}(n)}{T_{par}(p, n)}) = \Theta(S_p(n))
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Conversely, &lt;span class="math"&gt;\(S_p(n) = \Theta(p)\)&lt;/span&gt; also holds. For an efficient parallel algorithm the speedup is thus approximately &lt;span class="math"&gt;\(p\)&lt;/span&gt;-fold.&lt;/p&gt;
&lt;h3 id="diskusija"&gt;Diskusija&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Intuition&lt;/em&gt;: More processors solve the problem in less time.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Theory&lt;/em&gt;: If a parallel algorithm is efficient, then &lt;span class="math"&gt;\(p\)&lt;/span&gt; processors solve the problem &lt;span class="math"&gt;\(p\)&lt;/span&gt; times faster than the best sequential algorithm.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Practice&lt;/em&gt;: It turns out that once &lt;span class="math"&gt;\(p\)&lt;/span&gt; exceeds some threshold (which depends on the problem and the algorithm), &lt;span class="math"&gt;\(S_p(n)\)&lt;/span&gt; is no longer a linear function of &lt;span class="math"&gt;\(p\)&lt;/span&gt;. We can keep adding processors, but the speedup will not be linear, so efficiency drops (each processor contributes less to the speedup than it could).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What causes the drop in speedup? Usually it is the communication between processors.&lt;/p&gt;
&lt;h3 id="brentov-izrek"&gt;Brentov izrek&lt;/h3&gt;
&lt;p&gt;Vzpostavlja povezavo med št. procesorjem in št. operacij, ki so bile izvedene.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Izrek&lt;/strong&gt;: Naj bo &lt;span class="math"&gt;\(A\)&lt;/span&gt; vzporedni algoritem, ki se izvaja na nekem PRAMu z neomejenim št. procesorjev &lt;span class="math"&gt;\(t\)&lt;/span&gt; časa in pri tem izvede &lt;span class="math"&gt;\(m\)&lt;/span&gt; operacij. Algoritem &lt;span class="math"&gt;\(A\)&lt;/span&gt; se lahko izvede na PRAMu enakega tipa in na &lt;span class="math"&gt;\(p\)&lt;/span&gt; procesorjih v času &lt;span class="math"&gt;\(t'\)&lt;/span&gt;:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
t' = O(\frac{m}{p} + t)
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Proof&lt;/em&gt;: Suppose that on the first PRAM, &lt;span class="math"&gt;\(A\)&lt;/span&gt; performs &lt;span class="math"&gt;\(m(i)\)&lt;/span&gt; operations in step &lt;span class="math"&gt;\(i\)&lt;/span&gt; (&lt;span class="math"&gt;\(i\)&lt;/span&gt; running from 1 to &lt;span class="math"&gt;\(t\)&lt;/span&gt;). The total number of operations is thus &lt;span class="math"&gt;\(\sum_{i=1}^t m(i) = m\)&lt;/span&gt;. The second PRAM may need more time for step &lt;span class="math"&gt;\(i\)&lt;/span&gt; of algorithm &lt;span class="math"&gt;\(A\)&lt;/span&gt; (since it only has a limited number of processors), namely &lt;span class="math"&gt;\(\lceil\frac{m(i)}{p}\rceil\)&lt;/span&gt; steps. The total number of steps on the second PRAM is therefore:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
\sum_{i=1}^{t} \lceil\frac{m(i)}{p}\rceil \leq \sum_{i=1}^{t} ( \frac{m(i)}{p} + 1 ) = \frac{1}{p} \cdot \sum_{i=1}^{t} m(i) + \sum_{i=1}^{t} 1 = \frac{m}{p} + t
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Application&lt;/em&gt;: Given numbers &lt;span class="math"&gt;\(x_1,\ldots,x_n\)&lt;/span&gt;, compute their maximum on an EREW PRAM.&lt;/p&gt;
&lt;p&gt;An EREW PRAM would compute the max with a tree algorithm:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;x1     x2     x3     x4    x5     x6    x7    x8
  \   /         \   /        \   /       \   /
   [ ]           [ ]          [ ]         [ ]
    \...
...
4 processors on level 1
2 on level 2
1 on the last level&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the PRAM has at least &lt;span class="math"&gt;\(\frac{n}{2}\)&lt;/span&gt; processors (i.e., &lt;span class="math"&gt;\(\Theta(n)\)&lt;/span&gt;), it can compute the maximum in time &lt;span class="math"&gt;\(\Theta(\log{n})\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Questions&lt;/em&gt;:&lt;/p&gt;
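&lt;p&gt;A sequential simulation of this tree reduction (a sketch; the function name is illustrative) shows that the number of simulated parallel steps is &lt;span class="math"&gt;\(\lceil\log_2{n}\rceil\)&lt;/span&gt;:&lt;/p&gt;

```python
def tree_max(values):
    """Simulate the EREW tree reduction: each while-iteration is one
    parallel step in which disjoint pairs are compared (no concurrent
    reads or writes, so it is legal on an EREW PRAM)."""
    a = list(values)
    steps = 0
    while len(a) > 1:
        # one parallel step: ~len(a)/2 processors, each takes one disjoint pair
        a = [max(a[i:i + 2]) for i in range(0, len(a), 2)]
        steps += 1
    return a[0], steps

# 8 numbers: the maximum is found in 3 parallel steps (log2(8) levels)
assert tree_max(range(1, 9)) == (8, 3)
```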
&lt;ol type="1"&gt;
&lt;li&gt;What if only, say, &lt;span class="math"&gt;\(O(\sqrt{n})\)&lt;/span&gt; processors were available; how does that affect the running time?&lt;/li&gt;
&lt;li&gt;By at most how much can we reduce the number of processors (asymptotically) so that the tree algorithm still runs in time &lt;span class="math"&gt;\(\Theta(\log{n})\)&lt;/span&gt;?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We apply Brent’s theorem:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The tree algorithm takes &lt;span class="math"&gt;\(t = \log{n}\)&lt;/span&gt; steps on an unbounded PRAM.&lt;/li&gt;
&lt;li&gt;The total number of operations it performs is &lt;span class="math"&gt;\(m = n - 1\)&lt;/span&gt; (the number of internal nodes of the tree).&lt;/li&gt;
&lt;li&gt;With only &lt;span class="math"&gt;\(p\)&lt;/span&gt; processors available, they can execute the tree algorithm in time &lt;span class="math"&gt;\(O(\frac{n-1}{p} + \log{n})\)&lt;/span&gt; (by Brent).&lt;/li&gt;
&lt;li&gt;Answer to question 1: with &lt;span class="math"&gt;\(p=\Theta(\sqrt{n})\)&lt;/span&gt;, the running time would be &lt;span class="math"&gt;\(t = O(\frac{n-1}{k \cdot \sqrt{n}} + \log{n})\)&lt;/span&gt;, which is &lt;span class="math"&gt;\(O(\sqrt{n} + \log{n})\)&lt;/span&gt; (pull the constant &lt;span class="math"&gt;\(k\)&lt;/span&gt; out, divide by &lt;span class="math"&gt;\(\sqrt{n}\)&lt;/span&gt;, neglect the &lt;span class="math"&gt;\(\frac{1}{\sqrt{n}}\)&lt;/span&gt; term), i.e. &lt;span class="math"&gt;\(O(\sqrt{n})\)&lt;/span&gt;.&lt;/li&gt;
&lt;li&gt;Answer to question 2: if we take &lt;span class="math"&gt;\(p=\Theta(\frac{n-1}{\log{n}})\)&lt;/span&gt; processors, the tree algorithm runs in time &lt;span class="math"&gt;\(O(\frac{n-1}{\frac{n-1}{\log{n}}} + \log{n}) = O(\log{n})\)&lt;/span&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;E.g., with 1024 numbers (&lt;span class="math"&gt;\(n=1024\)&lt;/span&gt;), the straightforward approach needs &lt;span class="math"&gt;\(\frac{n}{2} = 512\)&lt;/span&gt; processors. But we can take just 100 of them and still execute the algorithm in a comparable time.&lt;/p&gt;
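&lt;p&gt;Plugging these numbers into Brent’s bound &lt;span class="math"&gt;\(\frac{m}{p} + t\)&lt;/span&gt; gives a quick sanity check (a rough sketch; constant factors are ignored and the function name is illustrative):&lt;/p&gt;

```python
import math

def brent_steps(n, p):
    """Brent's upper bound on the number of steps (constants ignored):
    m/p + t, with m = n - 1 operations and t = log2(n) tree levels."""
    return (n - 1) / p + math.log2(n)

n = 1024
full = brent_steps(n, n // 2)  # 512 processors
few = brent_steps(n, 100)      # only 100 processors
# `few` stays under twice `full`: a comparable time with far fewer processors
assert few < 2 * full
```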
&lt;h2 id="primerjava-modelov-pram"&gt;Primerjava modelov PRAM&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-04-14&lt;/p&gt;
&lt;p&gt;These are the so-called &lt;em&gt;model separation theorems&lt;/em&gt; for PRAM.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Question&lt;/em&gt;: Do the CRCW, CREW and EREW PRAM models really differ in their “power”? If so, by how much?&lt;/p&gt;
&lt;p&gt;Motivation: It is easier to design a parallel algorithm for CRCW than for EREW. Can the increase in time when moving from CRCW to EREW be arbitrarily large?&lt;/p&gt;
&lt;p&gt;How do we find out? &lt;em&gt;If&lt;/em&gt; these models really differ in their “power”, then there &lt;em&gt;must&lt;/em&gt; exist two problems, &lt;span class="math"&gt;\(P_1\)&lt;/span&gt; and &lt;span class="math"&gt;\(P_2\)&lt;/span&gt;, on which the difference in their “power” shows.&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
CRCW \supset CREW \supset EREW
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Each set represents the problems that the corresponding model can solve within a given time (asymptotically speaking). CRCW can solve the most problems in a given time, CREW somewhat fewer, and EREW the fewest. We conjecture that these classes are strictly contained in one another.&lt;/p&gt;
&lt;p&gt;The two problems we are looking for lie between EREW and CRCW: &lt;span class="math"&gt;\(P_1\)&lt;/span&gt; is in CRCW, &lt;span class="math"&gt;\(P_2\)&lt;/span&gt; in CREW.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(P_1\)&lt;/span&gt; reši CRCW v nekem času &lt;span class="math"&gt;\(T(n)\)&lt;/span&gt;, v katerem ga CREW ne more&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(P_2\)&lt;/span&gt; reši CREW v nekem času &lt;span class="math"&gt;\(T'(n)\)&lt;/span&gt;, v katerem ga EREW ne more&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;How do we find such problems, if they exist at all?&lt;/p&gt;
&lt;h3 id="odnos-crcw-crew-patološki-problem-p_1"&gt;Odnos CRCW-CREW (patološki problem &lt;span class="math"&gt;\(P_1\)&lt;/span&gt;)&lt;/h3&gt;
&lt;p&gt;Ali obstaja problem &lt;span class="math"&gt;\(P_1\)&lt;/span&gt;, ki ga z enakim št. procesorjev CRCW reši asimptotično hitreje kot katerikoli CREW?&lt;/p&gt;
&lt;p&gt;Da. &lt;span class="math"&gt;\(P_1\)&lt;/span&gt; je problem &lt;strong&gt;iskanje največjega števila&lt;/strong&gt; med danimi &lt;span class="math"&gt;\(n\)&lt;/span&gt; števili (&lt;span class="math"&gt;\(\mbox{max} \{A[1], A[2], \ldots, A[n]\}\)&lt;/span&gt;).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trditev&lt;/strong&gt;: CRCW PRAM (z &lt;span class="math"&gt;\(n^2\)&lt;/span&gt; PE) reši &lt;span class="math"&gt;\(P_1\)&lt;/span&gt; v času &lt;span class="math"&gt;\(O(1)\)&lt;/span&gt; (če damo zadosti procesorjev, reši to v konstantnem času).&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Dokaz&lt;/em&gt;: Konstrukcija algoritma:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;IzracunajMax(A, n):
  # m[i] stays true until some number larger than A[i] is found
  forall i in {1,...,n} inparallel do:
    m[i] := true

  # for all distinct pairs of numbers i,j:
  #   if some A[j] is larger than A[i], then A[i] cannot be the maximum element
  forall i,j in {1,...,n}, i!=j inparallel do:
    if A[i] &amp;lt; A[j]:
      m[i] := false

  forall i in {1,...,n} inparallel do:
    if m[i] == true:
      max := A[i]

  return max&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On a CRCW (with &lt;span class="math"&gt;\(n^2\)&lt;/span&gt; PEs) the algorithm takes constant time (&lt;span class="math"&gt;\(O(1)\)&lt;/span&gt;).&lt;/p&gt;
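&lt;p&gt;The pseudocode can be simulated sequentially (a sketch; each &lt;code&gt;forall ... inparallel&lt;/code&gt; becomes an ordinary loop, which hides the concurrent writes a real CRCW PRAM performs in a single step):&lt;/p&gt;

```python
def crcw_max(A):
    n = len(A)
    # parallel step 1 (O(1) with n PEs): every candidate flag starts true
    m = [True] * n
    # parallel step 2 (O(1) with n^2 PEs): pair (i, j) clears m[i] if A[j] > A[i];
    # several PEs may write False to the same m[i], which requires concurrent write
    for i in range(n):
        for j in range(n):
            if i != j and A[i] < A[j]:
                m[i] = False
    # parallel step 3 (O(1)): a surviving flag marks the maximum
    return next(A[i] for i in range(n) if m[i])
```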
&lt;p&gt;What about CREW? It does not allow concurrent writes by several processors, so it cannot do this in a single step, since it may not write several values to the same location. CREW can carry this out in at most &lt;span class="math"&gt;\(O(\log{n})\)&lt;/span&gt; time (writing via a tree).&lt;/p&gt;
&lt;p&gt;In moving from CRCW to CREW we thus pay a factor of &lt;span class="math"&gt;\(\log{n}\)&lt;/span&gt;.&lt;/p&gt;
&lt;h3 id="odnos-crew-erew-patološki-problem-p_2"&gt;Odnos CREW-EREW (patološki problem &lt;span class="math"&gt;\(P_2\)&lt;/span&gt;)&lt;/h3&gt;
&lt;p&gt;Ali obstaja problem &lt;span class="math"&gt;\(P_2\)&lt;/span&gt;, ki ga z enakim št. procesorjev CREW reši asimptotično hitreje kot katerikoli EREW?&lt;/p&gt;
&lt;p&gt;Da. &lt;span class="math"&gt;\(P_2\)&lt;/span&gt; je problem &lt;strong&gt;pripadnost množici&lt;/strong&gt; (ang. membership problem): Ali je dani &lt;span class="math"&gt;\(x\)&lt;/span&gt; element dane množice &lt;span class="math"&gt;\(\{x_1, \ldots, x_n\}\)&lt;/span&gt;? Sprašujemo, ali je &lt;span class="math"&gt;\(x\)&lt;/span&gt; v množici.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trditev&lt;/strong&gt;: CREW PRAM (z &lt;span class="math"&gt;\(n\)&lt;/span&gt; PE) lahko reši &lt;span class="math"&gt;\(P_2\)&lt;/span&gt; v času &lt;span class="math"&gt;\(O(1)\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Dokaz&lt;/em&gt;: Algoritem intuitivno dokažemo:&lt;/p&gt;
&lt;p&gt;Vsakemu elementu &lt;span class="math"&gt;\(x_i\)&lt;/span&gt; je dodeljen svoj PE (označimo ga s &lt;span class="math"&gt;\(p_i\)&lt;/span&gt;). Dana je spremenljivka &lt;span class="math"&gt;\(rez\)&lt;/span&gt;, ki bo na koncu 1, če bomo za &lt;span class="math"&gt;\(x\)&lt;/span&gt; ugotovili, da pripada množici, oz. 0 sicer. Na začetku (ob inicializaciji) nastavimo &lt;span class="math"&gt;\(rez\)&lt;/span&gt; na 0. Vsak &lt;span class="math"&gt;\(p_i\)&lt;/span&gt; istočasno prebere &lt;span class="math"&gt;\(x\)&lt;/span&gt; (hkrati to lahko naredijo v CREW v času &lt;span class="math"&gt;\(O(1)\)&lt;/span&gt;) in to primerja s svojim &lt;span class="math"&gt;\(x_i\)&lt;/span&gt;. Če je rezultat primerjanja pozitiven (&lt;span class="math"&gt;\(x=x_i\)&lt;/span&gt;), potem PE vpiše v rezultat &lt;code&gt;rez = true&lt;/code&gt; (kvečjemu eden bo vpisal true v &lt;code&gt;rez&lt;/code&gt;—to je dobro, ker imamo CREW).&lt;/p&gt;
&lt;p&gt;Kaj pa EREW? Ne, v konstantnem času tega ne zmore. Potrebuje vsaj &lt;span class="math"&gt;\(\Omega(\log{n})\)&lt;/span&gt; korakov.&lt;/p&gt;
&lt;p&gt;Težava: Kako naj si PE priskrbijo vsak svojo kopijo števila &lt;span class="math"&gt;\(x\)&lt;/span&gt; v času &lt;span class="math"&gt;\(O(1)\)&lt;/span&gt;?&lt;/p&gt;
&lt;p&gt;Kaj pa če bi bilo &lt;span class="math"&gt;\(n\)&lt;/span&gt; kopij že na voljo? Bi šlo, toda vstavljanje &lt;span class="math"&gt;\(n\)&lt;/span&gt; kopij zahteva &lt;span class="math"&gt;\(\lceil\log{n}\rceil\)&lt;/span&gt; korakov.&lt;/p&gt;
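&lt;p&gt;The &lt;span class="math"&gt;\(\lceil\log{n}\rceil\)&lt;/span&gt;-step copying can be sketched as recursive doubling: in each EREW step, every processor that already holds a copy of &lt;span class="math"&gt;\(x\)&lt;/span&gt; writes it into one fresh, distinct cell, so the number of copies doubles (a sketch; the function name is illustrative):&lt;/p&gt;

```python
def erew_broadcast_steps(n):
    """Count the EREW steps needed to create n copies of x by doubling:
    in each step every existing copy yields one new exclusive copy,
    so no cell is ever read or written concurrently."""
    copies, steps = 1, 0
    while copies < n:
        copies *= 2  # one parallel step: each holder writes one fresh cell
        steps += 1
    return steps     # equals ceil(log2(n))
```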
&lt;h3 id="izrek-o-simulaciji"&gt;Izrek o simulaciji&lt;/h3&gt;
&lt;p&gt;Našli smo &lt;span class="math"&gt;\(P_1\)&lt;/span&gt; in &lt;span class="math"&gt;\(P_2\)&lt;/span&gt;, ki ločita posamične modele PRAM za faktor vsaj &lt;span class="math"&gt;\(\Omega(\log{n})\)&lt;/span&gt;. Temu faktorju rečemo &lt;em&gt;ločevalni faktor&lt;/em&gt; (ang. separation factor).&lt;/p&gt;
&lt;p&gt;Če gremo iz CRCW na bolj realen model CREW, plačamo ceno vsaj &lt;span class="math"&gt;\(\Omega(\log{n})\)&lt;/span&gt;. Isto se zgodi pri prehodu iz CREW na EREW.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Vprašanje&lt;/em&gt;: Ali obstajajo &lt;span class="math"&gt;\(P'_1 in P'_2\)&lt;/span&gt;, ki povzročita (imata) še večji ločevalni faktor? Torej, ali se modeli CRCW, CREW, EREW razlikujejo za &lt;em&gt;poljubno&lt;/em&gt; velik razločevalni faktor? Ne. Razločevalni faktor je navzgor omejen, kar je dobro. Zgornja meja je kvečjemu &lt;span class="math"&gt;\(O(\log{n})\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;Modeli se med seboj &lt;strong&gt;razlikujejo natanko za &lt;span class="math"&gt;\(\log{n}\)&lt;/span&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Izrek&lt;/strong&gt;: Vsak vzporedni algoritem potrebuje na modelu CRCW PRAM (s &lt;span class="math"&gt;\(p\)&lt;/span&gt; procesorji) kvečjemu &lt;span class="math"&gt;\(O(\log{p})\)&lt;/span&gt;-krat manj časa kot ga rabi na modelu EREW PRAM (s &lt;span class="math"&gt;\(p\)&lt;/span&gt; procesorji).&lt;/p&gt;
&lt;p&gt;A practical consequence of this theorem:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;Design an algorithm for the CRCW PRAM (because it is easier).&lt;/li&gt;
&lt;li&gt;Analyze it and determine &lt;span class="math"&gt;\(T_{par}\)&lt;/span&gt;.&lt;/li&gt;
&lt;li&gt;This algorithm would then run on an EREW PRAM in time &lt;span class="math"&gt;\(T_{par} \cdot O(\log{p})\)&lt;/span&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="ponovitev"&gt;Ponovitev&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-05-05&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Question&lt;/em&gt;: How much more powerful is one model than another?&lt;/p&gt;
&lt;p&gt;Whatever the fastest of them (CRCW) can solve in time &lt;span class="math"&gt;\(t\)&lt;/span&gt;, the slowest (EREW) will solve in &lt;span class="math"&gt;\(\log(p) \cdot t\)&lt;/span&gt; time, i.e. with a worst-case slowdown proportional to the logarithm of the number of processors. This is good, because we are interested in polylogarithmic algorithms, and such a slowdown keeps us in the same class.&lt;/p&gt;
&lt;h3 id="tehnika-vzporedno-urejanje-urejanje-na-pram"&gt;Tehnika: Vzporedno urejanje (urejanje na PRAM)&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Zaporedni alg.&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;sorts &lt;span class="math"&gt;\(n\)&lt;/span&gt; numbers&lt;/li&gt;
&lt;li&gt;we also use the comparison operation (it can be avoided, e.g. bucket sort in APS)&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(T_{seq}(n) = \Theta(n \log n)\)&lt;/span&gt; (e.g. heap sort)&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(\Omega(n \log n)\)&lt;/span&gt; is also a lower bound for comparison-based sorting (proved in APS)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Parallel algorithm&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;in how much time and with how many PEs can &lt;span class="math"&gt;\(n\)&lt;/span&gt; numbers be sorted in parallel?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let &lt;span class="math"&gt;\(S\)&lt;/span&gt; be an algorithm for parallel sorting on a PRAM with &lt;span class="math"&gt;\(p\)&lt;/span&gt; processors.&lt;/p&gt;
&lt;p&gt;We already know that &lt;span class="math"&gt;\(C_p(n) = \Omega(T_{seq}(n))\)&lt;/span&gt; (the cost of algorithm &lt;span class="math"&gt;\(S\)&lt;/span&gt; is bounded from below by the time of the fastest sequential sorting algorithm). From the definition it follows that:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
p \cdot T_{par}(p,n) = \Omega(n \log n)
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Take &lt;span class="math"&gt;\(p = n\)&lt;/span&gt; and divide by &lt;span class="math"&gt;\(n\)&lt;/span&gt;: &lt;span class="math"&gt;\(T_{par}(n,n) = \Omega(\log n)\)&lt;/span&gt;. So the time for parallel sorting of &lt;span class="math"&gt;\(n\)&lt;/span&gt; numbers is &lt;em&gt;at least&lt;/em&gt; &lt;span class="math"&gt;\(\Omega(\log n)\)&lt;/span&gt;. We do not yet know whether this can be achieved, so it may also be asymptotically more.&lt;/p&gt;
&lt;p&gt;Can &lt;span class="math"&gt;\(n\)&lt;/span&gt; numbers be sorted with &lt;span class="math"&gt;\(n\)&lt;/span&gt; PEs in exactly &lt;span class="math"&gt;\(\Theta(\log n)\)&lt;/span&gt; time? (Only a constructive proof counts, i.e. the construction of such an algorithm.) Yes. This was shown by Richard Cole in 1986.&lt;/p&gt;
&lt;h4 id="primer-cole-ov-algoritem-skica"&gt;Primer: Cole-ov algoritem (skica)&lt;/h4&gt;
&lt;p&gt;Temelji na urejanju z zlivanjem (merge sort). Če je &lt;span class="math"&gt;\(n\)&lt;/span&gt; števil bo potrebnih &lt;span class="math"&gt;\(\log n\)&lt;/span&gt; vzp. zlivanj.&lt;/p&gt;
&lt;p&gt;Primer: &lt;span class="math"&gt;\(\{1, 2, ..., 8\}\)&lt;/span&gt;&lt;/p&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/primer-coleov-algoritem.jpg"&gt;&lt;img alt="Primer Cole-ov algoritem" height="452" src="http://gw.tnode.com/student/img/primer-coleov-algoritem.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Example of Cole’s algorithm&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Since there are &lt;span class="math"&gt;\(\log n\)&lt;/span&gt; parallel merges in total, the time &lt;span class="math"&gt;\(T_{par}(n,n) = \Theta(\log n)\)&lt;/span&gt; is achieved only if &lt;em&gt;each individual parallel merge takes only&lt;/em&gt; &lt;span class="math"&gt;\(O(1)\)&lt;/span&gt; time. How? By exploiting information from previous merges (ranks, compare-exchange operations in a hypercube).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Theorem&lt;/strong&gt; (R. Cole): A sequence of &lt;span class="math"&gt;\(n\)&lt;/span&gt; numbers can be sorted on a CREW PRAM with &lt;span class="math"&gt;\(\Theta(n)\)&lt;/span&gt; PEs in time &lt;span class="math"&gt;\(T_{par}(n,n) = \Theta(\log n)\)&lt;/span&gt;. (On an EREW PRAM in &lt;span class="math"&gt;\(O(\log^2 n)\)&lt;/span&gt;.)&lt;/p&gt;
&lt;p&gt;In practice, however, it still makes sense to look at the actual running time, i.e. the constants.&lt;/p&gt;
&lt;h2 id="osnovni-razred-vzp.-zahtevnosti-pomen-modela-pram"&gt;Osnovni razred vzp. zahtevnosti (pomen modela PRAM)&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Def.&lt;/strong&gt;: Računski problem je v razredu NC, če je rešljiv na modelu PRAM z &lt;span class="math"&gt;\(O(n^k)\)&lt;/span&gt; procesorjev v času &lt;span class="math"&gt;\(O(\log^c n)\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;Opomba: NC se po domače imenuje Nick’s Class (po Nick Pippenger).&lt;/p&gt;
&lt;p&gt;Zakaj takšen pogoj? Če je v zap. svetu “hiter” algoritem le tisti, ki je &lt;em&gt;polinomski&lt;/em&gt;, je v vzp. svetu “hiter” algoritem le tisti, ki je &lt;em&gt;polilogaritmičen&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Teoretično lahko konstruiramo vzp. algoritme, ki uporabljajo eksponentno mnogo procesorjev (včasih je tak algoritem &lt;em&gt;zelo hiter&lt;/em&gt;). Toda taki vzp. algoritmi v praksi tečejo bolj počasi kot trdi teorija! &lt;strong&gt;Vir težav&lt;/strong&gt; je 3-dimenzionalen prostor realnosti. Četudi bi zgradili vzp. računalnik z eksponentnim številom PE in te zgnetli na najmanjši možni prostor, se povezave in PEji ne morejo zgnesti na ta prostor ne da bi se vsaj ena povezava &lt;em&gt;bistveno podaljšala&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NC&lt;/strong&gt; = razred rač. problemov, ki so &lt;strong&gt;v praksi hitro rešljivi&lt;/strong&gt; na vzporeden način.&lt;/p&gt;
&lt;p&gt;Veliko vprašanje vzp. računanja: &lt;span class="math"&gt;\(P \stackrel{?}{=} NC\)&lt;/span&gt; (ali nam realna vzporednost bistveno kaj prinese)?&lt;/p&gt;
&lt;h3 id="povzetek"&gt;Povzetek&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;PRAM is (apparently) an unrealistic model (it has unbounded shared memory and instant access to every cell, and it ignores communication time) (today a range of more realistic models is known)&lt;/li&gt;
&lt;li&gt;nevertheless, the existence of a PRAM algorithm for a given problem:
&lt;ol type="1"&gt;
&lt;li&gt;tells us something about the &lt;em&gt;inherent properties of the problem&lt;/em&gt; (e.g. its amenability to parallelization) (e.g. polynomial GCD resists parallelization and is NC-complete)&lt;/li&gt;
&lt;li&gt;can lead to a &lt;em&gt;more practical/realistic algorithm&lt;/em&gt; (for a parallel computer with a given architecture, which is usually not a complete graph)&lt;/li&gt;
&lt;/ol&gt;&lt;/li&gt;
&lt;li&gt;if we fail to construct one for the PRAM, which is a simple model, it is even less likely that we will succeed on a more complex model&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;General opinion&lt;/em&gt;: Only PRAM algorithms with &lt;em&gt;optimal cost&lt;/em&gt; (i.e. &lt;span class="math"&gt;\(C_p(n) = \Theta(T_{seq}(n))\)&lt;/span&gt;) may turn out to be relevant in practice. Among these, we are most interested in those with &lt;em&gt;minimal&lt;/em&gt; &lt;span class="math"&gt;\(T_{par}\)&lt;/span&gt;.&lt;/p&gt;
</summary><category term="student"></category></entry><entry><title>Large networks grow smaller: How to choose the right simplification method?</title><link href="http://gw.tnode.com/network-analysis/netsci2014-large-networks-grow-smaller/" rel="alternate"></link><updated>2015-01-05T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-02-28:network-analysis/netsci2014-large-networks-grow-smaller/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Conference NetSci'14 logo" height="120" src="http://gw.tnode.com/network-analysis/img/netsci2014-logo.png" width="457"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h2 id="conference-proceeding"&gt;Conference proceeding&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;N. Blagus, L. Šubelj, G. Weiss, and M. Bajec, “&lt;strong&gt;Large networks grow smaller: How to choose the right simplification method?&lt;/strong&gt;,” in Proc. of NetSci’14, 2014.&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; &lt;a href="http://netsci2014.net/"&gt;conference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-book"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/network-analysis/f/netsci2014blagus-abstract.pdf"&gt;paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-picture-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/network-analysis/f/netsci2014blagus-poster.pdf"&gt;poster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-bookmark-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/network-analysis/f/netsci2014blagus.bib"&gt;bibtex&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;Network simplification has proved to be an effective tool for reducing large real-world networks while still providing a sufficient fit to the original network. However, even though a number of analyses have observed how networks change under simplification, a broad understanding of the whole process remains only partial. Questions such as ‘How to compare the original (i.e., complete) and the simplified (i.e., incomplete) network?’, ‘What factors impact the effectiveness of the simplification process?’, ‘What size of the simplified network provides the best fit to the original network?’, ‘Which simplification method to use?’ are far from solved in the literature. In our study, we analyze over 30 real-world networks of different size and origin (e.g., social, information, technological). We reduce the networks with several simplification methods (e.g., random node and link selection, breadth-first sampling, merging based on balance-propagation) and observe the changes of several fundamental properties (e.g., degree distribution, clustering coefficient, degree mixing, and density) under simplification. We show that reduction to about 10% of the original network adequately preserves the important properties. The best performing methods prove to be random node selection based on degree and breadth-first sampling. The results also show that the size of the simplified network influences the effectiveness of the simplification method, while the size and type of the original network do not. Besides basic properties, we also explore the changes of network structure under simplification. In particular, we focus on different groups of nodes commonly observed in real-world networks (e.g., communities, modules and mixtures of the two). In this case, the changes in simplification effectiveness occur among different types of networks.
For example, simplified social networks exhibit even stronger community structure than the original networks, while in simplified information networks the number of mixtures increases. In general, however, the proportion of nodes explained by the group structure increases in sampled networks; moreover, how well the node group structure is preserved does not depend on the choice of the simplification method. To summarize, the main advantage of our analysis is the large number of networks considered. We therefore provide reliable results concerning the effectiveness of the simplification process and support a better understanding of how networks change under simplification. In our future work we intend to create a framework for adaptive simplification of real-world networks, which would suggest the best simplification method for a given network based on its properties and the further use of the simplified network.&lt;/p&gt;
</summary><category term="network analysis"></category><category term="conference"></category><category term="paper"></category><category term="poster"></category></entry><entry><title>What coins the bitcoin? Exploratory analysis of bitcoin market value by network group discovery</title><link href="http://gw.tnode.com/network-analysis/netsci2014-what-coins-the-bitcoin/" rel="alternate"></link><updated>2015-01-05T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-02-28:network-analysis/netsci2014-what-coins-the-bitcoin/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Conference NetSci'14 logo" height="120" src="http://gw.tnode.com/network-analysis/img/netsci2014-logo.png" width="457"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;h2 id="conference-proceeding"&gt;Conference proceeding&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;L. Šubelj, G. Weiss, N. Blagus, and M. Bajec, “&lt;strong&gt;What coins the bitcoin? Exploratory analysis of bitcoin market value by network group discovery&lt;/strong&gt;,” in Proc. of NetSci’14, 2014, vol. 105, no. 2008.&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; &lt;a href="http://netsci2014.net/"&gt;conference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-book"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/network-analysis/f/netsci2014subelj-abstract.pdf"&gt;paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-picture-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/network-analysis/f/netsci2014subelj-poster.pdf"&gt;poster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-bookmark-o"&gt;&lt;/i&gt; &lt;a href="http://gw.tnode.com/network-analysis/f/netsci2014subelj.bib"&gt;bibtex&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;Bitcoin is a peer-to-peer digital currency that has gained tremendous popularity in recent years. Daily market trade currently exceeds 100000 BTC (over 50 million USD) with 70000 processed transactions. It relies on a network of computers that solve complex mathematical problems as part of a process that verifies and permanently records every transaction made. Unlike traditional currencies, no central authority governs the supply or has control over bitcoins. The bitcoin market value thus depends solely on people’s confidence or trust in it, similar to other real-world commodities and assets. Consequently, little is known about the trading behavior on exchange markets and their periodic or mutual dynamics. We thus report here the results of a preliminary exploratory analysis of bitcoin market value from a popular exchange market, BitStamp. We collect the data for a period of five days in January 2014 at a rate of about one minute and construct different network representations of the time series. In particular, we consider proximity networks, including cycle and correlation networks, transition networks and visibility graphs. The cycle network has a grid-like structure with a uniform degree distribution and high clustering. On the contrary, the transition network reveals a very sparse topology with a bell-shaped degree distribution and no pronounced degree correlations. The correlation network and the visibility graph are scale-free and small-world with assortative mixing by degree, otherwise a characteristic of social networks. Next, we apply a node group discovery approach to the constructed networks in order to gain an insight into their structure, and to reveal possible periodic patterns and other dynamics of the time series. In fact, complex network methods provide knowledge that is complementary to that of the standard approaches of time series analysis.
We adopt a group detection framework that can detect densely linked groups of nodes known as communities, groups of structurally equivalent nodes denoted modules, and different mixtures of these, with core/periphery and hub &amp;amp; spokes structures as special cases. According to the above, the cycle network is too dense to reveal any clear structure. On the other hand, the most significant groups in the transition network are communities with high modularity that represent the quantiles occupied by the bitcoin market value for some period in time (6 hours). The latter could be used for concept drift detection, which is a difficult problem in stream mining. Groups in the correlation network are core/periphery-like structures with negative modularity, where the dense core corresponds to periods in time with consistent bitcoin behavior and the periphery to notable shifts in the value that appear rather periodically. Whether this could be adopted for market price prediction remains unclear. Finally, groups in the visibility graph show similar characteristics, with the periphery corresponding to local extremes in the value that dominate a certain period of time. The above network representations can also model multidimensional time series, which enables the analysis of bitcoin market value and trade from several exchange markets simultaneously. Since the value can differ substantially across the markets, predicting the future fluctuations at one market from the dynamics of another could be of considerable practical value.&lt;/p&gt;
</summary><category term="network analysis"></category><category term="conference"></category><category term="paper"></category><category term="poster"></category></entry><entry><title>[UI-part1] Ensemble methods for data analytics</title><link href="http://gw.tnode.com/student/ui-part1/" rel="alternate"></link><updated>2014-03-26T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-02-26:student/ui-part1/</id><summary type="html">
&lt;p&gt;&lt;strong&gt;Course&lt;/strong&gt;: &lt;a href="https://ucilnica.fri.uni-lj.si/course/view.php?id=81"&gt;https://ucilnica.fri.uni-lj.si/course/view.php?id=81&lt;/a&gt;&lt;br/&gt;&lt;strong&gt;Lecturer&lt;/strong&gt;: prof. Marko Robnik-Šikonja&lt;br/&gt;&lt;strong&gt;Language&lt;/strong&gt;: English&lt;br/&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-02-26&lt;/p&gt;
&lt;p&gt;Lecturers (3 parts):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;prof. Marko Robnik Šikonja - Ensemble methods for data analytics&lt;/li&gt;
&lt;li&gt;akad. prof. dr. Ivan Bratko - Learning in logic, ILP&lt;/li&gt;
&lt;li&gt;izr. prof. dr. Zoran Bosnić - Incremental learning from data streams&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each part lasts 5 weeks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2 weeks of lectures (~4 hours)&lt;/li&gt;
&lt;li&gt;2 weeks of lab. exercises (~4 hours)&lt;/li&gt;
&lt;li&gt;1 week for completing and delivering the seminar work&lt;/li&gt;
&lt;li&gt;individual consultations (~25 hours of individual work)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each part must be completed with a grade of at least 50%. The final grade is the average grade of all parts.&lt;/p&gt;
&lt;p&gt;Course overview (part 1):&lt;/p&gt;
&lt;p&gt;Ensemble methods are one of the most successful and general data mining techniques. The block presents relevant theory and practical approaches necessary to tailor ensemble methods to specific new tasks. Each student solves an individual assignment focused on problems from her/his research agenda.&lt;/p&gt;
&lt;h2 id="ensemble-methods-for-data-analytics"&gt;Ensemble methods for data analytics&lt;/h2&gt;
&lt;h3 id="introduction"&gt;Introduction&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;R open source statistical system&lt;/li&gt;
&lt;li&gt;Some exercises are published (&lt;code&gt;exercise.txt&lt;/code&gt;) for getting familiar with it (source code for a random forest is provided)&lt;/li&gt;
&lt;li&gt;This year's assignment (&lt;code&gt;assignment.txt&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Instructions for project&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Read at least all the abstracts from the Collection of papers on Spletna ucilnica. It also contains two highly recommended books (Hastie, Tibshirani, Friedman: “The Elements of Statistical Learning”; G. James et al.: “An Introduction to Statistical Learning with Applications in R”; another good one is R. E. Schapire, Y. Freund: “Boosting: Foundations and Algorithms”, 2012).&lt;/p&gt;
&lt;p&gt;Grading (you need to get at least 50%):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;project (ensemble learning in connection with your research agenda) [60%] - next week individually&lt;/li&gt;
&lt;li&gt;presentation of the project [20%] - 26.3.2014, 10 min presentation, submit the report by 26.3.2014 8:00, 3-4 pages (problem description, data, related work, ensemble methods used)&lt;/li&gt;
&lt;li&gt;smaller programming assignment [20%]&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="general-scheme-for-ensemble-learning-el"&gt;General scheme for ensemble learning (EL)&lt;/h3&gt;
&lt;p&gt;Pseudo-code:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;construct &lt;span class="math"&gt;\(T\)&lt;/span&gt; nonidentical learners&lt;/li&gt;
&lt;li&gt;combine their predictions (eg. by simple voting)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The learners should be nonidentical in the sense of their predictions, and at least weak learners (better than random guessing by some &lt;span class="math"&gt;\(\epsilon &amp;gt; 0\)&lt;/span&gt;).&lt;/p&gt;
&lt;p&gt;Approaches covering specific variations (left: the component being varied; right: algorithms that vary it):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;parameters&lt;/em&gt;: stacking&lt;/li&gt;
&lt;li&gt;&lt;em&gt;data instances&lt;/em&gt;: bagging, boosting&lt;/li&gt;
&lt;li&gt;&lt;em&gt;data attributes&lt;/em&gt;: random forests&lt;/li&gt;
&lt;li&gt;&lt;em&gt;learning algorithms&lt;/em&gt;: stacking&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
D = \{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\}
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;In real world problems the true generating model is unknown (&lt;span class="math"&gt;\(y = f(x) + \epsilon\)&lt;/span&gt;, &lt;span class="math"&gt;\(\text{E}[\epsilon] = 0\)&lt;/span&gt;).&lt;/p&gt;
&lt;p&gt;With a learning algorithm we then train a model &lt;span class="math"&gt;\(g(x|D)\)&lt;/span&gt; to approximate &lt;span class="math"&gt;\(f(x)\)&lt;/span&gt;. One way to calculate the error is the mean squared error:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - g(x_i|D))^2
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;If we try to compute the expected error:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
\text{E}[MSE] = \frac{1}{n} \sum_{i=1}^{n} \text{E}[(y_i - g_i)^2], g_i = g(x_i|D)
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Bias variance decomposition can be used to decompose this expectation:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\(\text{E}[(y_i - g_i)^2]\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(= \text{E}[(y_i - f_i + f_i - g_i)^2]\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(= \text{E}[(y_i - f_i)^2] + \text{E}[(f_i - g_i)^2] + 2 * \text{E}[(f_i - g_i)*(y_i - f_i)]\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(= \text{E}[\epsilon^2] + \text{E}[(f_i - g_i)^2] + 2 * (\text{E}[f_i*y_i] - \text{E}[f_i^2] - \text{E}[g_i*y_i] + \text{E}[g_i*f_i])\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(.. \text{E}[f_i*y_i] = \text{E}[f_i*(f_i + \epsilon)] = \text{E}[f_i^2] + \text{E}[f_i*\epsilon] = \text{E}[f_i^2]\)&lt;/span&gt; (since &lt;span class="math"&gt;\(\text{E}[\epsilon] = 0\)&lt;/span&gt;)&lt;br/&gt;&lt;span class="math"&gt;\(.. \text{E}[g_i*y_i] = \text{E}[g_i*(f_i + \epsilon)] = \text{E}[g_i*f_i] + \text{E}[g_i*\epsilon] = \text{E}[g_i*f_i]\)&lt;/span&gt;, so the last term vanishes&lt;br/&gt;&lt;span class="math"&gt;\(= \text{E}[\epsilon^2] + \text{E}[(f_i - g_i)^2]\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(.. \text{E}[(f_i - g_i)^2] = \text{E}[(f_i - \text{E}[g_i])^2] + \text{E}[(\text{E}[g_i] - g_i)^2] + 0\)&lt;/span&gt; (the cross term again vanishes, using &lt;span class="math"&gt;\(\text{E}[a*X] = a*\text{E}[X]\)&lt;/span&gt;)&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\(\text{E}[(y_i - g_i)^2]\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(= \text{E}[\epsilon^2] + \text{E}[(f_i - \text{E}[g_i])^2] + \text{E}[(\text{E}[g_i] - g_i)^2]\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(= \text{Var}[\epsilon] + (bias)^2 + \text{Var}[g_i]\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;To improve our model we can therefore reduce its bias or its variance (the noise variance &lt;span class="math"&gt;\(\text{Var}[\epsilon]\)&lt;/span&gt; in the original data cannot be improved). With ensemble learning we mostly try to reduce the variance. Different models have different biases (eg. decision trees work on rectangles in the problem space). By combining different models we also address the bias problem, because averaged decision models are no longer restricted to a single bias.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Error due to Bias&lt;/em&gt;: The error due to bias is taken as the difference between the expected (or average) prediction of our model and the correct value which we are trying to predict. Of course you only have one model so talking about expected or average prediction values might seem a little strange. However, imagine you could repeat the whole model building process more than once: each time you gather new data and run a new analysis creating a new model. Due to randomness in the underlying data sets, the resulting models will have a range of predictions. Bias measures how far off in general these models’ predictions are from the correct value.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Error due to Variance&lt;/em&gt;: The error due to variance is taken as the variability of a model prediction for a given data point. Again, imagine you can repeat the entire model building process multiple times. The variance is how much the predictions for a given point vary between different realizations of the model.&lt;/p&gt;
&lt;footer&gt;
&lt;cite&gt;&lt;a href="http://scott.fortmann-roe.com/docs/BiasVariance.html"&gt;Scott Fortmann-Roe&lt;/a&gt;&lt;/cite&gt;
&lt;/footer&gt;
&lt;/blockquote&gt;
&lt;h3 id="kinds-of-ensemble-models"&gt;Kinds of ensemble models&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Sequential ensembles&lt;/strong&gt; – eg. boosting (more control over the learning process, internally measure the error and focus on problematic instances; danger is over-fitting):&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;build a model&lt;/li&gt;
&lt;li&gt;estimate error&lt;/li&gt;
&lt;li&gt;reweigh the data&lt;/li&gt;
&lt;li&gt;(repeat)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Parallel ensembles&lt;/strong&gt; – eg. bagging, random forest (no estimation of error in between the process, parallelization; danger may be that you don’t get all special cases):&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;build lots of independent models&lt;/li&gt;
&lt;li&gt;combine&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="parallel-ensembles"&gt;Parallel ensembles&lt;/h2&gt;
&lt;p&gt;All are learned in parallel and their results are combined by voting.&lt;/p&gt;
&lt;h3 id="algorithm-bagging-leo-breiman-1996"&gt;Algorithm: Bagging (Leo Breiman, 1996)&lt;/h3&gt;
&lt;p&gt;Input:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dataset &lt;span class="math"&gt;\(D = \{(x_1, y_1), ..., (x_n, y_n)\}\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;base learning algorithm &lt;span class="math"&gt;\(L\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;number of base models &lt;span class="math"&gt;\(T\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pseudo-code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;for(t in 1..T)
  D_bs = bootstrap sample of D
  h_t = L(D_bs)
end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(bootstrap sampling from &lt;span class="math"&gt;\(n\)&lt;/span&gt; items = select &lt;span class="math"&gt;\(n\)&lt;/span&gt; items with replacement)&lt;/p&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(H(x) = \operatorname*{arg\,max}_y \sum_{t=1}^{T} I(h_t(x) = y)\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;where &lt;span class="math"&gt;\(y\)&lt;/span&gt; is element from a set of classes &lt;span class="math"&gt;\(Y\)&lt;/span&gt;, and: &lt;span class="math"&gt;\[
  I(statement) = \begin{cases}
1 &amp;amp; \text{if } statement = true \\
0 &amp;amp; \text{else}
  \end{cases}
  \]&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
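&lt;p&gt;The bagging scheme above can be written out as a short runnable sketch (illustrative Python, not from the lecture; the 1-nearest-neighbour base learner and the toy dataset are my own choices):&lt;/p&gt;

```python
import random
from collections import Counter

def bootstrap(data):
    # sample n instances with replacement (the bootstrap sample D_bs)
    return [random.choice(data) for _ in data]

def train_1nn(sample):
    # base learner: 1-nearest neighbour on the bootstrap sample
    def h(x):
        return min(sample, key=lambda p: abs(p[0] - x))[1]
    return h

def bagging(data, T, learner):
    # train T base models, each on its own bootstrap sample
    models = [learner(bootstrap(data)) for _ in range(T)]
    def H(x):
        # H(x) = arg max_y sum_t I(h_t(x) = y), i.e. majority vote
        votes = Counter(h(x) for h in models)
        return votes.most_common(1)[0][0]
    return H

random.seed(0)
data = [(0.0, "a"), (0.2, "a"), (0.4, "a"), (0.6, "b"), (0.8, "b"), (1.0, "b")]
H = bagging(data, T=25, learner=train_1nn)
```

&lt;p&gt;Each bootstrap sample yields a slightly different model, and the ensemble returns the class with the most votes.&lt;/p&gt;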
&lt;p&gt;Usually the base learning algorithm is a tree learner, because trees seem to be the most unstable (small changes in the data produce different trees), simple, and fast. The final classification is the class selected most often by voting.&lt;/p&gt;
&lt;p&gt;Why does it work? Estimate the error of the whole ensemble.&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\(Y = \{-1, 1\}\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(H(x) = sign( \sum_{i=1}^{T} h_i(x) )\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\(P(h_i(x) \neq f(x)) = \epsilon\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(P(H(x) \neq f(x)) = \sum_{k=0}^{\lfloor T/2 \rfloor} \binom{T}{k}*(1-\epsilon)^k*\epsilon^{T-k} \leq e^{-\frac{1}{2}*T*(2*\epsilon-1)^2}\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Hoeffding's inequality states:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\(x_1, x_2, ..., x_n\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(\overline{X} = \frac{1}{n} * (x_1 + ... + x_n)\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(P(x_i \in [a_i, b_i]) = 1\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\(P(\overline{X} - E[\overline{X}] \geq t) \leq e^{(-2*n^2*t^2) / \sum_{i=1}^{n} (b_i-a_i)^2}\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;One bootstrap sample contains &lt;span class="math"&gt;\((1-\frac{1}{e})*n \approx 0.632*n\)&lt;/span&gt; distinct instances on average, so each model uses about 63.2% of the instances, and ~36.8% of the instances are not used by that learner (out-of-bag samples). Out-of-bag samples can be used for estimating the performance/error or controlling the learning process.&lt;/p&gt;
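&lt;p&gt;The ~63.2% figure is easy to check empirically (a quick simulation, not part of the original notes):&lt;/p&gt;

```python
import random

def unique_fraction(n, trials=200):
    # draw n items with replacement and measure the fraction of distinct items
    total = 0.0
    for _ in range(trials):
        sample = [random.randrange(n) for _ in range(n)]
        total += len(set(sample)) / n
    return total / trials

random.seed(0)
frac = unique_fraction(1000)
# 1 - (1 - 1/n)^n approaches 1 - 1/e, roughly 0.632, for large n
```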
&lt;h3 id="algorithm-random-forests"&gt;Algorithm: Random forests&lt;/h3&gt;
&lt;p&gt;People tried to increase the instability of learners to produce different models, and random forests were invented. The algorithm is quite similar to bagging, but the learning algorithm is a random tree learner L, and each execution of it is also given a different random subset of attributes.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;randomTree(instances):
  if num. of instances &amp;lt;= threshold:
    return leafNode
  select best attribute A from a random subset of size k
  split data according to A into left and right subsets
  leftBranch = randomTree(leftSubset)
  rightBranch = randomTree(rightSubset)
  return tree node splitting on A with leftBranch and rightBranch&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The threshold is typically &lt;span class="math"&gt;\(1\)&lt;/span&gt;, so complete and random trees are built. Typical choices for the subset size are &lt;span class="math"&gt;\(k = \log_2{(\text{num. of attrs})}\)&lt;/span&gt; or &lt;span class="math"&gt;\(k = \sqrt{\text{num. of attrs}}\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;We get many different trees, each one produces a vote and we sum together the votes. But they are mostly incomprehensible.&lt;/p&gt;
&lt;p&gt;Additional information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;importance of attributes (for each tree we know used attributes and their oob set (out-of-bag samples), we put oob instances down the tree, we check what effect it has if we change or ignore an attribute, can also find conditional dependencies between attributes)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The margin is the distance between the decision boundary and the nearest instance, and we want to maximize it (SVM does this explicitly). For random forests it is better to estimate the margin than the classification error.&lt;/p&gt;
&lt;p&gt;Margin of random forest:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\(mr(x, y)\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(= P(H(x)=y) - \max_{j \in C, j \neq y} P(H(x)=j)\)&lt;/span&gt;&lt;br/&gt;&lt;span class="math"&gt;\(= (\text{probability of correct class}) - (\text{probability of maximal incorrect class})\)&lt;/span&gt;&lt;/p&gt;
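&lt;p&gt;From the trees' votes this margin is straightforward to compute (an illustrative sketch; the per-tree predictions for one instance are assumed given):&lt;/p&gt;

```python
from collections import Counter

def margin(votes, true_class):
    # votes: list of per-tree predictions for one instance
    # mr(x, y) = P(H(x)=y) - max_{j != y} P(H(x)=j)
    T = len(votes)
    counts = Counter(votes)
    p_correct = counts[true_class] / T
    p_wrong = max([counts[j] / T for j in counts if j != true_class], default=0.0)
    return p_correct - p_wrong

m = margin(["a", "a", "a", "b", "c"], "a")  # 3/5 - 1/5 = 0.4
```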
&lt;h3 id="next-time"&gt;Next time&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;bring laptops&lt;/li&gt;
&lt;li&gt;1 hour trying to solve exercises&lt;/li&gt;
&lt;li&gt;in the meantime individual consultations&lt;/li&gt;
&lt;li&gt;try to figure out how ensembles can be used in your problem&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="review"&gt;Review&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-03-05&lt;/p&gt;
&lt;p&gt;Random forest:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;different configurations&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(n\)&lt;/span&gt; instances with replacement&lt;/li&gt;
&lt;li&gt;not selected are out-of-bag instances (~37%)&lt;/li&gt;
&lt;li&gt;additional randomness is introduced by selecting a random subset of &lt;span class="math"&gt;\(\log_2 a\)&lt;/span&gt; or &lt;span class="math"&gt;\(\sqrt{a}\)&lt;/span&gt; attributes in each node&lt;/li&gt;
&lt;li&gt;almost no parameters, just num. of trees and num. of attributes (&lt;span class="math"&gt;\(a\)&lt;/span&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="feature-evaluation"&gt;Feature evaluation&lt;/h3&gt;
&lt;p&gt;Feature evaluation and estimation of generalization performance can be performed by randomly scrambling attributes/features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;keep distribution&lt;/li&gt;
&lt;li&gt;if the attribute is important, then the classification performance (eg. accuracy, or better, an estimate of the margins between classes) drops a lot; otherwise the attribute is unimportant&lt;/li&gt;
&lt;li&gt;for each feature we get a score that is the importance of this feature&lt;/li&gt;
&lt;li&gt;we can compare it to ReliefF&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="with-random-forest"&gt;With random forest&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;distance measure: classify with all trees and count how many times two instances land in the same leaf (co-occurrences in leaves)&lt;/li&gt;
&lt;li&gt;find outliers: compute the distance to all other instances; the ones with the largest distances are outliers; we can represent this as a distance/similarity/proximity matrix between all pairs of instances, find a few largest distances (~0.5%, problem dependent), and convert proximities to distance values with &lt;span class="math"&gt;\(\sqrt{1 - \frac{\text{proximity}}{\text{num. of trees}}}\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;multidimensional scaling: lower-dimensional projection of the data, identify clusters visually&lt;/li&gt;
&lt;li&gt;proximity matrix enables you to do clustering, gives you intrinsic similarity, can expose class fragmentation problem (eg. a disease can be caused by many factors, people with a disease can have different reasons), you can also see if clustering matches your classes&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="density-estimation"&gt;Density estimation&lt;/h3&gt;
&lt;p&gt;Probability density visualization in 1D tells you where your instances would lie (eg. Gaussian). Higher dimensions can be visualized with heat maps or histograms. A Gaussian representation becomes impractical in more than ~5 dimensions.&lt;/p&gt;
&lt;h4 id="with-random-trees"&gt;With random trees&lt;/h4&gt;
&lt;p&gt;Build random trees with random splits until you get a leaf (full trees with a single instance in each leaf). Reasonably good statistics can be estimated based on the average depth of instances in the leaves. For each instance you approximately know in what dense region it appears.&lt;/p&gt;
&lt;h2 id="sequential-ensembles"&gt;Sequential ensembles&lt;/h2&gt;
&lt;p&gt;First a classifier &lt;span class="math"&gt;\(C_1\)&lt;/span&gt; is built and its error &lt;span class="math"&gt;\(e_1\)&lt;/span&gt; estimated; the next classifier &lt;span class="math"&gt;\(C_2\)&lt;/span&gt; corrects the problems of the previous classifiers, its new error &lt;span class="math"&gt;\(e_2\)&lt;/span&gt; is estimated, and so on… In the end their results are combined with weighted voting. Such an approach is also capable of reducing the bias. The best-known family of such methods is called &lt;em&gt;boosting&lt;/em&gt;.&lt;/p&gt;
&lt;h3 id="algorithm-adaboost"&gt;Algorithm: AdaBoost&lt;/h3&gt;
&lt;p&gt;Input:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dataset &lt;span class="math"&gt;\(D = \{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\}\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;base algorithm &lt;span class="math"&gt;\(L\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;number of training rounds &lt;span class="math"&gt;\(T\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pseudo-code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;D_1(i) = 1 / n  // initial weight of instances
for t = 1 to T
  h_t = L(D, D_t)  // either directly using weights or sampling with D_t
  e_t = Pr_{i ~ D_t}[h_t(x_i) != y_i]  // weighted error of hypothesis h_t
  a_t = 1/2 * ln((1 - e_t) / e_t)  // negative weight if e_t is above 1/2
  D_{t+1}(i) = D_t(i) * e^(-a_t*y_i*h_t(x_i)) / Z_t  // Z_t normalizes D_{t+1} to a distribution
end&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(H(x) = sign \sum_{t=1}^{T} a_t * h_t(x)\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Also:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;for &lt;span class="math"&gt;\(a_t\)&lt;/span&gt; - if the error is a little bit below &lt;span class="math"&gt;\(\frac{1}{2}\)&lt;/span&gt;, we can say we learned a little bit (&lt;span class="math"&gt;\(a_t\)&lt;/span&gt; is slightly positive)&lt;/li&gt;
&lt;li&gt;where &lt;span class="math"&gt;\(Z_t\)&lt;/span&gt; is the normalization factor assuring that &lt;span class="math"&gt;\(D_{t+1}(i)\)&lt;/span&gt; is a probability distribution&lt;/li&gt;
&lt;li&gt;also &lt;span class="math"&gt;\(D_{t+1}(i)\)&lt;/span&gt; equals: &lt;span class="math"&gt;\[
D_{t+1}(i) = \frac{D_t(i)}{Z_t} * \begin{cases}
  e^{-a_t} &amp;amp; \text{if } h_t(x_i) = y_i \\
  e^{a_t} &amp;amp; \text{if } h_t(x_i) \neq y_i
\end{cases}
\]&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
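&lt;p&gt;A minimal runnable version of AdaBoost with decision stumps on 1-D data (my own illustrative sketch, not the lecturer's code; the stump search and the toy dataset are assumptions):&lt;/p&gt;

```python
import math
from itertools import product

def stump(t, s):
    # predicts s * sign(x - t); s in {+1.0, -1.0} is the polarity
    return lambda x: math.copysign(1.0, x - t) * s

def adaboost(data, T):
    n = len(data)
    w = [1.0 / n] * n                      # D_1(i) = 1/n
    thresholds = [x - 0.5 for x, _ in data]
    ensemble = []
    for _ in range(T):
        # base learner L: pick the stump with minimal weighted error e_t
        h = min((stump(t, s) for t, s in product(thresholds, (1.0, -1.0))),
                key=lambda g: sum(wi for wi, (x, y) in zip(w, data) if g(x) != y))
        e = sum(wi for wi, (x, y) in zip(w, data) if h(x) != y)
        e = max(e, 1e-10)                  # avoid division by zero
        a = 0.5 * math.log((1.0 - e) / e)  # a_t
        ensemble.append((a, h))
        w = [wi * math.exp(-a * y * h(x)) for wi, (x, y) in zip(w, data)]
        Z = sum(w)                         # Z_t normalizes D_{t+1} to a distribution
        w = [wi / Z for wi in w]
    def H(x):                              # H(x) = sign(sum_t a_t * h_t(x))
        return math.copysign(1.0, sum(a * h(x) for a, h in ensemble))
    return H

data = [(0.0, 1.0), (1.0, 1.0), (2.0, -1.0), (3.0, -1.0)]
H = adaboost(data, T=5)
```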
&lt;p&gt;Small decision trees and decision stumps work fine and are pretty fast with AdaBoost. On noisy data AdaBoost tends to over-fit (because of the increased weights of noisy instances). Prevent this by limiting the maximal weight (eg. MadaBoost).&lt;/p&gt;
&lt;p&gt;Explanation is harder than for random forests. A general explanation method for predictions (E. Štrumbelj): make a small modification to an attribute and estimate its influence on the prediction. This way you can explain what influence each attribute has. From the perspective of game theory it is possible to show that the explanation is correct. It is a perturbation method with clever sampling.&lt;/p&gt;
&lt;h2 id="stacking"&gt;Stacking&lt;/h2&gt;
&lt;p&gt;Basic models give you predictions for each instance and you add them as additional features (second level learner). In addition to basic attributes you also get additional features as predictions of some models.&lt;/p&gt;
&lt;p&gt;Input:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dataset &lt;span class="math"&gt;\(D = \{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\}\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;first level learners &lt;span class="math"&gt;\(L_1, L_2, ..., L_t\)&lt;/span&gt; (any strong learners, eg. random forest, boosting, neural networks)&lt;/li&gt;
&lt;li&gt;second level algorithm &lt;span class="math"&gt;\(L\)&lt;/span&gt; (frequently generalized linear model (GLM))&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pseudo-code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;for t = 1 to T {
  h_t = L_t(D)
}
D' = {}
for i = 1 to n {  // generate the new second-level dataset
  for t = 1 to T {
    z_{i,t} = h_t(x_i)
  }
  D' = D' union {((z_{i,1}, z_{i,2}, ..., z_{i,T}), y_i)}
}
h' = L(D')&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(H(x) = h'(h_1(x), h_2(x), ..., h_T(x))\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Also:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;we try to figure out where each classifier was wrong and improve that&lt;/li&gt;
&lt;li&gt;can be seen as a generalization of bagging&lt;/li&gt;
&lt;li&gt;if base learners are capable of outputting probabilities, the probabilities of each class (&lt;span class="math"&gt;\(p_1, p_2, ..., p_c\)&lt;/span&gt;) can be used as meta-features for each learner, giving &lt;span class="math"&gt;\(t * c + 1\)&lt;/span&gt; columns&lt;/li&gt;
&lt;/ul&gt;
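&lt;p&gt;The pseudo-code translates to a short sketch (the toy first- and second-level learners are my own hypothetical choices; a real setup would generate the meta-features with cross-validation to avoid leaking training labels):&lt;/p&gt;

```python
def mean_learner(D):
    # toy first-level learner: always predict the mean target
    m = sum(y for _, y in D) / len(D)
    return lambda x: m

def nn_learner(D):
    # toy first-level learner: 1-nearest neighbour
    return lambda x: min(D, key=lambda p: abs(p[0] - x))[1]

def linear_learner(D2):
    # second-level learner L: least-squares fit of y on the mean of the meta-features
    xs = [sum(z) / len(z) for z, _ in D2]
    ys = [y for _, y in D2]
    n = len(D2)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / max(sum((x - mx) ** 2 for x in xs), 1e-12))
    return lambda z: my + b * (sum(z) / len(z) - mx)

def stacking(D, learners, L):
    hs = [Lt(D) for Lt in learners]                 # h_t = L_t(D)
    D2 = [([h(x) for h in hs], y) for x, y in D]    # meta-features z_{i,t} = h_t(x_i)
    h2 = L(D2)                                      # h' = L(D')
    return lambda x: h2([h(x) for h in hs])         # H(x) = h'(h_1(x), ..., h_T(x))

D = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
H = stacking(D, [mean_learner, nn_learner], linear_learner)
```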
&lt;h3 id="assignment"&gt;Assignment&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(bias = E[(f_i - h_i)^2]\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(var = E[(h_i - \overline{h_i})^2]\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;randomly select/sample from 1/2 of dataset&lt;/li&gt;
&lt;li&gt;plot bias-variance decomposition&lt;/li&gt;
&lt;/ul&gt;
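&lt;p&gt;The assignment's two quantities can be estimated numerically; a minimal sketch under toy assumptions (true model f(x) = x^2 plus noise, "learner" = fit a constant by the mean; all names and constants are illustrative):&lt;/p&gt;

```python
import random

def experiment(models=2000, n=20):
    # true model f(x) = x^2 with Gaussian noise; the learner fits a constant
    rng = random.Random(0)
    x0_true = 0.25                 # f at the evaluation point x0 = 0.5
    preds = []
    for _ in range(models):        # repeat the whole model-building process
        ys = [rng.uniform(0.0, 1.0) ** 2 + rng.gauss(0.0, 0.1) for _ in range(n)]
        preds.append(sum(ys) / n)  # the constant model's prediction at x0
    mean_pred = sum(preds) / models
    bias2 = (x0_true - mean_pred) ** 2                       # (f - E[h])^2
    var = sum((p - mean_pred) ** 2 for p in preds) / models  # E[(h - E[h])^2]
    return bias2, var

bias2, var = experiment()
```

&lt;p&gt;Since the constant learner averages over the whole input range, it carries a systematic bias at x0 = 0.5; the variance term shrinks as the sample size n grows.&lt;/p&gt;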
&lt;h2 id="clustering-ensembles"&gt;Clustering ensembles&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-03-12&lt;/p&gt;
&lt;p&gt;Clustering is about getting a general overview of data.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;How to generate a single clustering?
&lt;ul&gt;
&lt;li&gt;k-means, k-medoids, hierarchical, RF…&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;How to combine several clusterings?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Base clustering example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(\lambda^{(1)} = \{1,1,2,2,2,3,3\}\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;3 clusters (&lt;span class="math"&gt;\(1: x_1, x_2\)&lt;/span&gt;; &lt;span class="math"&gt;\(2: x_3, x_4, x_5\)&lt;/span&gt;; &lt;span class="math"&gt;\(3: x_6, x_7\)&lt;/span&gt;) (exactly the same clustering as &lt;span class="math"&gt;\(\lambda^{(2)} = \{3,3,1,1,1,2,2\}\)&lt;/span&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Form a similarity matrix &lt;span class="math"&gt;\(M\)&lt;/span&gt; (dim. &lt;span class="math"&gt;\(n*n\)&lt;/span&gt;), where &lt;span class="math"&gt;\(M(i,j)\)&lt;/span&gt; is the similarity score of instances &lt;span class="math"&gt;\(x_i\)&lt;/span&gt;, &lt;span class="math"&gt;\(x_j\)&lt;/span&gt; (eg. 1 if in the same cluster, 0 otherwise).&lt;/p&gt;
&lt;h3 id="similarity-ensemble-for-clustering"&gt;Similarity ensemble for clustering&lt;/h3&gt;
&lt;p&gt;Input:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dataset &lt;span class="math"&gt;\(D = \{x_1,...,x_n\}\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;clusterer &lt;span class="math"&gt;\(L\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pseudo-code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;for(q = 1 to r) {  // r different clusterings
  lambda^{(q)} = L^{(q)}(D)  // form a clustering with k^{(q)} clusters
  produce similarity matrix M^{(q)} based on lambda^{(q)}
}
M = 1/r * sum_{q=1}^r (M^{(q)})  // consensus similarity&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ensemble clustering: &lt;span class="math"&gt;\(\lambda = L(M)\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
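&lt;p&gt;The consensus similarity matrix can be built directly from the cluster labels (an illustrative sketch using the example clusterings from above):&lt;/p&gt;

```python
def similarity_matrix(labels):
    # M(i, j) = 1 if instances i and j share a cluster, else 0
    n = len(labels)
    return [[1.0 if labels[i] == labels[j] else 0.0 for j in range(n)]
            for i in range(n)]

def consensus(clusterings):
    # M = 1/r * sum over the r base similarity matrices M^(q)
    r = len(clusterings)
    n = len(clusterings[0])
    Ms = [similarity_matrix(lam) for lam in clusterings]
    return [[sum(M[i][j] for M in Ms) / r for j in range(n)] for i in range(n)]

# lambda^(1) and lambda^(2) from the example encode the same clustering
M = consensus([[1, 1, 2, 2, 2, 3, 3], [3, 3, 1, 1, 1, 2, 2]])
```

&lt;p&gt;A final clustering of M (eg. hierarchical clustering on 1 - M as a distance) gives the ensemble result.&lt;/p&gt;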
&lt;h3 id="algorithm-aode-averaged-one-dependence-estimator"&gt;Algorithm: AODE (Averaged One-Dependence estimator)&lt;/h3&gt;
&lt;p&gt;Recall that Naive Bayes assumes that, given the class attribute &lt;span class="math"&gt;\(y \in \{c_1,c_2,...c_k\}\)&lt;/span&gt;, the individual attributes are conditionally independent:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
P(y|x_1,x_2,...,x_a) = \frac{P(y,x_1,...,x_a)}{P(x_1,...,x_a)} = \frac{P(y) \prod_{i=1}^{a} P(x_i|y)}{P(x_1,...,x_a)}
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Only the numerator is relevant: &lt;span class="math"&gt;\(h(x) = \max_y P(y) \prod_{i=1}^{a} P(x_i|y)\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Relax the independence assumption by adding just one attribute into the equation: &lt;span class="math"&gt;\(h_j(x) = \max_y P(y,x_j) \prod_{i=1,i \ne j}^{a} P(x_i|y,x_j)\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;This gives one classifier for each attribute, where every other attribute can be conditionally dependent on it, and from these classifiers we can build an ensemble. Eg. with 10 classes and 20 values in an attribute we have to assess 200 combinations, so the fragmentation of the data will increase and we need lots of data.&lt;/p&gt;
&lt;p&gt;Averaging over all relaxed classifiers we get AODE:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
h(x) = \frac{1}{a} \sum_{j=1}^{a} h_j(x)
\]&lt;/span&gt;&lt;/p&gt;
&lt;h3 id="algorithm-mars-multivariate-adaptive-regression-splines"&gt;Algorithm: MARS (Multivariate Adaptive Regression Splines)&lt;/h3&gt;
&lt;p&gt;It is a variant of step-wise regression and it works well even for a large number of attributes.&lt;/p&gt;
&lt;p&gt;Two basis functions (learners): &lt;span class="math"&gt;\((x-t)_+\)&lt;/span&gt;, &lt;span class="math"&gt;\((t-x)_+\)&lt;/span&gt; (piecewise-linear functions, &lt;span class="math"&gt;\(t\)&lt;/span&gt; a given point, &lt;span class="math"&gt;\(+\)&lt;/span&gt; indicates only the positive part)&lt;/p&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/algorithm-mars-functions.jpg"&gt;&lt;img alt="Algorithm MARS functions" height="452" src="http://gw.tnode.com/student/img/algorithm-mars-functions.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Algorithm MARS functions&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
(x-t)_+ = \begin{cases}
  x-t &amp;amp; \text{if } x &amp;gt; t \\
  0 &amp;amp; \text{else}
\end{cases}
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
(t-x)_+ = \begin{cases}
  t-x &amp;amp; \text{if } x &amp;lt; t \\
  0 &amp;amp; \text{else}
\end{cases}
\]&lt;/span&gt;&lt;/p&gt;
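&lt;p&gt;The two basis functions are just positive parts (hinges), easy to write down (an illustrative sketch, not part of the original notes):&lt;/p&gt;

```python
def hinge_right(t):
    # (x - t)_+ : zero to the left of t, then linear
    return lambda x: max(x - t, 0.0)

def hinge_left(t):
    # (t - x)_+ : linear up to t, then zero
    return lambda x: max(t - x, 0.0)

h1, h2 = hinge_right(2.0), hinge_left(2.0)
vals = [(h1(x), h2(x)) for x in (0.0, 2.0, 3.0)]
# a MARS model is beta_0 + sum_m beta_m * (products of such hinges)
```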
&lt;p&gt;We put two such functions with &lt;span class="math"&gt;\(t\)&lt;/span&gt; at every value of every attribute and then we want to combine them into an ensemble. We combine them all by selecting only the useful ones, and we multiply basis functions together.&lt;/p&gt;
&lt;p&gt;Idea:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;collection &lt;span class="math"&gt;\(C = \{(x_j-t)_+, (t-x_j)_+; t \in \{x_{1j}, x_{2j},...x_{nj}\}; j \in \{1,2,...a\}\}\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(f(x) = \beta_0 + \sum_{m=1}^{M} \beta_m h_m(x)\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(h_m(x) \in C\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;effectively we consider products of collections &lt;span class="math"&gt;\(C \times C \times ...C^i\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(\beta_m\)&lt;/span&gt; – coefficients learned with a generalized linear model (using the least squares criterion)&lt;/li&gt;
&lt;li&gt;candidates consist of pairs of functions for each attribute value&lt;/li&gt;
&lt;li&gt;by multiplying more and more line segments we approximate the target function more and more&lt;/li&gt;
&lt;li&gt;we’ll probably over-fit it&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pseudo-code (iteratively create ensemble members):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;start with a constant (&lt;span class="math"&gt;\(1\)&lt;/span&gt;)&lt;/li&gt;
&lt;li&gt;repeat until (satisfied)&lt;/li&gt;
&lt;li&gt;select best candidate multiplied with existing members&lt;/li&gt;
&lt;li&gt;add to model&lt;/li&gt;
&lt;li&gt;prune the model (using regularization or heuristic based on complexity)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="const-sensitive-learning-ensembles"&gt;Const-sensitive learning ensembles&lt;/h2&gt;
&lt;p&gt;Misclassification cost is the cost when we make an error. Represented as values in a matrix between predicted and actual classifications. Asymmetric misclassification cost matrix introduces cost-sensitive learning.&lt;/p&gt;
&lt;p&gt;One re-sampling approach is used in boosting-type algorithms, where instances with high cost are pushed to the classifiers more often.&lt;/p&gt;
&lt;h3 id="algorithm-metacost-based-on-bagging"&gt;Algorithm: MetaCost (based on bagging)&lt;/h3&gt;
&lt;p&gt;Input:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;training data &lt;span class="math"&gt;\(D\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;cost matrix &lt;span class="math"&gt;\(C\)&lt;/span&gt;, &lt;span class="math"&gt;\(c_{ij}\)&lt;/span&gt; is the cost of misclassifying instance of class &lt;span class="math"&gt;\(i\)&lt;/span&gt; as class &lt;span class="math"&gt;\(j\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pseudo-code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;for(i=1 to T) {  // T number of basic models
  D_i = subsample of D with n instances (bootstrap)
  M_i = L(D_i)
}
for(each x in D) {
  for(each class j) {
    P(j|x) = 1/T * sum_{i=1}^{T} (P(j|x,M_i))
  }
  assign a label to x as: arg min_{i \in \{1,...c\}} sum_{j \in \{1,...c\}} P(j|x)*C_{ji}  // weight all probabilities by their cost
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(L(\{\text{a new dataset of }(x, \text{assigned class values})\})\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
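&lt;p&gt;The relabeling step — choosing the class with minimal expected cost — can be sketched as follows (the probabilities and the cost matrix are hypothetical examples):&lt;/p&gt;

```python
def min_cost_label(p, C):
    # p[j]   : averaged ensemble estimate P(j|x)
    # C[j][i]: cost of misclassifying true class j as class i
    c = len(p)
    expected = [sum(p[j] * C[j][i] for j in range(c)) for i in range(c)]
    return min(range(c), key=lambda i: expected[i])

# asymmetric costs: predicting 0 when the truth is 1 costs 10
C = [[0.0, 1.0], [10.0, 0.0]]
label = min_cost_label([0.8, 0.2], C)
```

&lt;p&gt;Here class 1 is chosen even though P(0|x) = 0.8, because the expected cost of predicting 0 (0.2 * 10 = 2.0) exceeds the expected cost of predicting 1 (0.8 * 1 = 0.8).&lt;/p&gt;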
&lt;p&gt;An unbalanced dataset is one where instances of one class are much more probable than the other (eg. 1: 1%; 0: 99%). One possible approach is to use a cost matrix. Learning from such data can result in overfitting.&lt;/p&gt;
&lt;p&gt;Two approaches:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;oversampling&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;technique SMOTE: compute a neighborhood and create new instances by linear interpolation between nearby instances of the minority class; place them based on previous clustering, density&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;undersampling&lt;/em&gt; (throw away some data)
&lt;ul&gt;
&lt;li&gt;search for instances near the border; others can be thrown away (similar to support vectors)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
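&lt;p&gt;The SMOTE idea — a synthetic minority instance placed on the segment between a minority instance and one of its minority neighbours — in a few lines (a sketch of the interpolation step only, not the full k-nearest-neighbour algorithm):&lt;/p&gt;

```python
import random

def smote_point(x, neighbor, rng):
    # new synthetic instance somewhere on the segment between x and neighbor
    g = rng.random()
    return tuple(a + g * (b - a) for a, b in zip(x, neighbor))

rng = random.Random(0)
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3)]
x = minority[0]
# nearest minority neighbour by squared Euclidean distance
neighbor = min(minority[1:], key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)))
synthetic = smote_point(x, neighbor, rng)
```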
&lt;h3 id="algorithm-easyensemble"&gt;Algorithm: EasyEnsemble&lt;/h3&gt;
&lt;p&gt;The idea is to repeat the under-sampling enough times, &lt;span class="math"&gt;\(T \sim \frac{|N|}{|P|}\)&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;Input:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dataset &lt;span class="math"&gt;\(D\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;minority class examples &lt;span class="math"&gt;\(P \subset D\)&lt;/span&gt; (positive, outliers, precious)&lt;/li&gt;
&lt;li&gt;majority class instances &lt;span class="math"&gt;\(N \subset D\)&lt;/span&gt; (negative)&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(T\)&lt;/span&gt; – number of subsets from &lt;span class="math"&gt;\(N\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pseudo-code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;for(i=1 to T) {
  find a random subset N_i of N with |N_i| = |P|
  use N_i and P to train AdaBoost ensemble H_i(x) = sign \sum_{j=1}^{S_i} a_{ij} * h_{ij}(x)
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(H(x) = \operatorname{sign} \sum_{i=1}^{T} H_i(x)\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="student"></category></entry><entry><title>[AA-part1] Parallel architectures</title><link href="http://gw.tnode.com/student/aa-part1/" rel="alternate"></link><updated>2014-03-17T00:00:00+01:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-02-24:student/aa-part1/</id><summary type="html">
&lt;p&gt;&lt;strong&gt;Course&lt;/strong&gt;: &lt;a href="https://ucilnica.fri.uni-lj.si/course/view.php?id=89"&gt;https://ucilnica.fri.uni-lj.si/course/view.php?id=89&lt;/a&gt;&lt;br/&gt;&lt;strong&gt;Lecturer&lt;/strong&gt;: prof. dr. Dušan Kodek&lt;br/&gt;&lt;strong&gt;Language&lt;/strong&gt;: English, Slovenian&lt;br/&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-02-24&lt;/p&gt;
&lt;p&gt;Lecturers (3 parts):&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;prof. dr. Dušan Kodek - Parallel architectures, SIMD, MIMD&lt;/li&gt;
&lt;li&gt;prof. dr. Borut Robič - Parallel algorithms, PRAM&lt;/li&gt;
&lt;li&gt;doc. dr. Tomaž Dobravec - CUDA architecture, OpenCL&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The purpose of this course is to introduce students to the field of parallel computing which is in many areas becoming a basic tool for problem solving. The topics include architectures of parallel computers as well as algorithms that are needed for this type of computation and are closely related to a given architecture. The structure of the course will allow students to use theoretical knowledge for the practical design of parallel computer systems and parallel algorithms that can be used for complex problem solving. The latest parallel computers will be studied as examples and the advanced tools for solving a typical parallel problem will be given.&lt;/p&gt;
&lt;p&gt;Course overview (part 1):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;limitations of serial computation: IPC, Tomasulo algorithm, superscalar and VLIW approach&lt;/li&gt;
&lt;li&gt;development of parallel architectures and technology: vector, SIMD and MIMD&lt;/li&gt;
&lt;li&gt;interprocessor communication problems and interconnect networks&lt;/li&gt;
&lt;li&gt;review of architectures used in the most powerful parallel computers to date&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Exams depend on the lecturers, are more individual, and you contact them when you are ready. Basic structure:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;homework (seminar or written exam at home)&lt;/li&gt;
&lt;li&gt;oral exam&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The contact person for the literature (two folders) is as. A. Božiček. Visit prof. D. Kodek to receive the homework assignment, which must be solved before the oral exam.&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;h3 id="reasons-for-parallel-computation"&gt;Reasons for parallel computation&lt;/h3&gt;
&lt;p&gt;Transistors/chip: 1971 ~ &lt;span class="math"&gt;\(10^4 / \text{chip}\)&lt;/span&gt; .. 2014 ~ &lt;span class="math"&gt;\(10^{10} / \text{chip}\)&lt;/span&gt;; ~1000000x more&lt;/p&gt;
&lt;p&gt;Clock frequency (speed/transistor): 1971 ~ &lt;span class="math"&gt;\(1 \text{MHz}\)&lt;/span&gt; .. 2014 ~ &lt;span class="math"&gt;\(5 \text{GHz}\)&lt;/span&gt;; ~5000x more&lt;/p&gt;
&lt;p&gt;Chip size/speed: ~200x more&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Number of transistors per chip doubles every 18 months.&lt;/p&gt;
&lt;footer&gt;
&lt;cite&gt;Moore’s law (1965, 1975 corrected)&lt;/cite&gt;
&lt;/footer&gt;
&lt;/blockquote&gt;
&lt;p&gt;Amdahl’s law represents a limit on speedup.&lt;/p&gt;
&lt;p&gt;We really have no choice.&lt;/p&gt;
&lt;h3 id="flynn-classification-m.-j.-flynn-1966"&gt;Flynn classification &lt;small&gt;(M. J. Flynn, 1966)&lt;/small&gt;&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;+-----------+ &amp;lt;--(instructions, m_I)---- +--------+
| Processor |                            | Memory |
+-----------+ &amp;lt;--(data/operands, m_D)--&amp;gt; +--------+&lt;/code&gt;&lt;/pre&gt;
&lt;ol type="1"&gt;
&lt;li&gt;SISD (single instruction single data): &lt;span class="math"&gt;\(m_I = 1\)&lt;/span&gt;, &lt;span class="math"&gt;\(m_D = 1\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;SIMD (single instruction multiple data): &lt;span class="math"&gt;\(m_I = 1\)&lt;/span&gt;, &lt;span class="math"&gt;\(m_D = N &amp;gt; 1\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;MISD (multiple instruction single data): does not exist in practice&lt;/li&gt;
&lt;li&gt;MIMD (multiple instruction multiple data, multicomputers): &lt;span class="math"&gt;\(m_I = M \gg 1\)&lt;/span&gt; (thousands), &lt;span class="math"&gt;\(m_D = N \gg 1\)&lt;/span&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Types of parallelism:&lt;/p&gt;
&lt;ol type="a"&gt;
&lt;li&gt;Instruction parallelism (difficult because threads need to be identified before parallelization)&lt;/li&gt;
&lt;li&gt;Data/operand parallelism&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;On the &lt;a href="http://www.top500.org/lists/2013/11/"&gt;Top500 supercomputer list&lt;/a&gt;, Tianhe-2 is the current leader.&lt;/p&gt;
&lt;p&gt;Heat is an enormous problem: every transistor switch dissipates energy (&lt;span class="math"&gt;\(E = C * U^2 / 2\)&lt;/span&gt;), and each wire acts as a resistor. Supply voltage has dropped from &lt;span class="math"&gt;\(5 \text{V}\)&lt;/span&gt; to &lt;span class="math"&gt;\(3 \text{V}\)&lt;/span&gt;, …, &lt;span class="math"&gt;\(0.75 \text{V}\)&lt;/span&gt;, but silicon transistors do not work below this. Power and heat set an upper limit on how big a processor can be.&lt;/p&gt;
&lt;h3 id="limitations-of-parallelism"&gt;Limitations of parallelism&lt;/h3&gt;
&lt;p&gt;&lt;cite&gt;Amdahl’s law (G. M. Amdahl, 1967)&lt;/cite&gt;:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
S(N) = \frac{1}{f + (1 - f) / N}
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\(N\)&lt;/span&gt; .. number of processors&lt;br/&gt;&lt;span class="math"&gt;\(S(N)\)&lt;/span&gt; .. speedup&lt;br/&gt;&lt;span class="math"&gt;\(f\)&lt;/span&gt; .. sequential part&lt;br/&gt;&lt;span class="math"&gt;\(1 - f\)&lt;/span&gt; .. parallel part&lt;/p&gt;
&lt;p&gt;Eg.: &lt;span class="math"&gt;\(f = 0.1 \rightarrow S(N) \leq 10\)&lt;/span&gt;; &lt;span class="math"&gt;\(f = 0.001 \rightarrow S(N) \leq 1000\)&lt;/span&gt;&lt;/p&gt;
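&lt;p&gt;The bound &lt;span class="math"&gt;\(S(N) \leq 1/f\)&lt;/span&gt; is easy to verify numerically; a minimal sketch (Python, illustrative names):&lt;/p&gt;

```python
def amdahl_speedup(f, n):
    """S(N) = 1 / (f + (1 - f)/N), where f is the sequential fraction."""
    return 1.0 / (f + (1.0 - f) / n)

# with f = 0.1 the speedup approaches but never exceeds 1/f = 10:
print(round(amdahl_speedup(0.1, 10), 2))    # 5.26
print(round(amdahl_speedup(0.1, 1000), 2))  # 9.91
```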
&lt;p&gt;Parallelization is not universal: it works well for some problems but not for all (e.g. the simplex algorithm is difficult to parallelize).&lt;/p&gt;
&lt;h2 id="sisd-computers"&gt;SISD computers&lt;/h2&gt;
&lt;h3 id="where-is-the-problem"&gt;Where is the problem?&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;pipeline, &lt;span class="math"&gt;\(CPI \geq 1\)&lt;/span&gt; (clocks per instruction close to 1)&lt;/li&gt;
&lt;li&gt;pipeline hazards:
&lt;ol type="1"&gt;
&lt;li&gt;data hazard (unavailable operands):
&lt;ul&gt;
&lt;li&gt;read after write (RAW) (flow dependence)&lt;/li&gt;
&lt;li&gt;write after write (WAW) (output dependence)&lt;/li&gt;
&lt;li&gt;write after read (WAR) (antidependence)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;control hazard (branch instructions)&lt;/li&gt;
&lt;li&gt;structural hazard (busy functional units)&lt;/li&gt;
&lt;/ol&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="elimination-of-data-hazards"&gt;Elimination of data hazards&lt;/h3&gt;
&lt;pre class="sourceCode nasm"&gt;&lt;code class="sourceCode nasm"&gt;&lt;span class="kw"&gt;FDIV&lt;/span&gt; F0, F5, F6  &lt;span class="co"&gt;; F0&amp;lt;-F5/F6&lt;/span&gt;
&lt;span class="kw"&gt;FADD&lt;/span&gt; F4, F0, F2
&lt;span class="kw"&gt;FSUB&lt;/span&gt; F8, F2, F1&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Register &lt;code&gt;F0&lt;/code&gt;: a RAW hazard (~1960) is a true data hazard and nothing can be done about it, except dynamic execution (reordering of instructions).&lt;/p&gt;
&lt;pre class="sourceCode nasm"&gt;&lt;code class="sourceCode nasm"&gt;&lt;span class="kw"&gt;FDIV&lt;/span&gt; F0, F5, F6
&lt;span class="kw"&gt;FADD&lt;/span&gt; F4, F0, F2
&lt;span class="kw"&gt;FST&lt;/span&gt; &lt;span class="dv"&gt;0&lt;/span&gt;(R1), F4
&lt;span class="kw"&gt;FSUB&lt;/span&gt; F2, F3, F7
&lt;span class="kw"&gt;FMUL&lt;/span&gt; F4, F3, F2&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Register &lt;code&gt;F2&lt;/code&gt;: WAR (antidependency)&lt;/p&gt;
&lt;p&gt;Register &lt;code&gt;F4&lt;/code&gt;: WAW (output dependency)&lt;/p&gt;
&lt;p&gt;WAW and WAR hazards represent naming dependencies that can be solved by renaming registers (&lt;code&gt;F2&lt;/code&gt;&lt;span class="math"&gt;\(\rightarrow\)&lt;/span&gt;&lt;code&gt;FT1&lt;/code&gt;, &lt;code&gt;F4&lt;/code&gt;&lt;span class="math"&gt;\(\rightarrow\)&lt;/span&gt;&lt;code&gt;FT2&lt;/code&gt;):&lt;/p&gt;
&lt;pre class="sourceCode nasm"&gt;&lt;code class="sourceCode nasm"&gt;&lt;span class="kw"&gt;FDIV&lt;/span&gt; F0, F5, F6
&lt;span class="kw"&gt;FADD&lt;/span&gt; FT2, F0, F2
&lt;span class="kw"&gt;FST&lt;/span&gt; &lt;span class="dv"&gt;0&lt;/span&gt;(R1), FT2
&lt;span class="kw"&gt;FSUB&lt;/span&gt; FT1, F3, F7
&lt;span class="kw"&gt;FMUL&lt;/span&gt; F4, F3, FT1&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Who should do this renaming?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;programmer or compiler (software): such programs use many more registers, and a program cannot adapt to newer processors with more registers&lt;/li&gt;
&lt;li&gt;hardware solutions (should solve in &lt;span class="math"&gt;\(1\)&lt;/span&gt; instruction per &lt;span class="math"&gt;\(0.145 \text{ns}\)&lt;/span&gt;):
&lt;ul&gt;
&lt;li&gt;scoreboarding (1963, CDC 6600)&lt;/li&gt;
&lt;li&gt;Tomasulo algorithm (1967, IBM 360/91)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
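&lt;p&gt;The effect of renaming can be sketched in a few lines (Python; this mirrors the FT-register example above, not the exact hardware mechanism):&lt;/p&gt;

```python
def rename_registers(instrs):
    # give every destination a fresh tag; reads use the latest definition;
    # this removes WAW and WAR (naming) hazards, while RAW dependencies remain
    latest, out, n = {}, [], 0
    for op, dst, s1, s2 in instrs:
        s1, s2 = latest.get(s1, s1), latest.get(s2, s2)
        n += 1
        tag = "FT%d" % n
        latest[dst] = tag
        out.append((op, tag, s1, s2))
    return out

prog = [("FDIV", "F0", "F5", "F6"),
        ("FADD", "F4", "F0", "F2"),
        ("FSUB", "F2", "F3", "F7"),
        ("FMUL", "F4", "F3", "F2")]
for ins in rename_registers(prog):
    print(ins)
```

&lt;p&gt;After renaming, &lt;code&gt;FSUB&lt;/code&gt; and &lt;code&gt;FMUL&lt;/code&gt; no longer conflict with the earlier writers of &lt;code&gt;F2&lt;/code&gt; and &lt;code&gt;F4&lt;/code&gt;, while the true RAW chain through &lt;code&gt;F0&lt;/code&gt; is preserved.&lt;/p&gt;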
&lt;h3 id="tomasulo-algorithm"&gt;Tomasulo algorithm&lt;/h3&gt;
&lt;p&gt;(Chapter 7 in Kodek book.)&lt;/p&gt;
&lt;p&gt;Recall that all but the simplest processors consist of multiple functional units (*, %, +/-, store, load), sequential and parallel, which may or may not be pipelined. Each functional unit has a reservation station (FIFO or otherwise). A waiting instruction from IR is placed into one of the reservation stations if one is available (otherwise it waits), with either real or virtual operand values (the instruction is not executed until the real values are available).&lt;/p&gt;
&lt;p&gt;Each reservation station has 6 parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(O_p\)&lt;/span&gt; - operation to perform, each unit may have more operations&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(U_j\)&lt;/span&gt;, &lt;span class="math"&gt;\(U_k\)&lt;/span&gt; - addresses of the reservation station that will produce results (for those that are waiting)&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(V_j\)&lt;/span&gt;, &lt;span class="math"&gt;\(V_k\)&lt;/span&gt; - real values of 2 input operands/results&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(R_j\)&lt;/span&gt;, &lt;span class="math"&gt;\(R_k\)&lt;/span&gt; - flags indicating when &lt;span class="math"&gt;\(V_j\)&lt;/span&gt; and &lt;span class="math"&gt;\(V_k\)&lt;/span&gt; are ready&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(B\)&lt;/span&gt; - busy flag indicates reservation station (if all are busy there is a structural hazard and the instruction must wait)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Output from the functional unit is placed on the common data bus (CDB) and goes back to all reservation stations, which check whether they need the resulting value (format, value, &lt;span class="math"&gt;\(U_j\)&lt;/span&gt;, &lt;span class="math"&gt;\(U_k\)&lt;/span&gt;). The result must also be written to the programmer-accessible registers of the architecture.&lt;/p&gt;
&lt;p&gt;Implicit renaming is done at the issue stage, when instructions are written into reservation stations. This resolves WAW and WAR hazards.&lt;/p&gt;
&lt;p&gt;Processing stalls if the number of reservation stations for a functional unit is not high enough (or in case of jump instructions), but this rarely happens. Newer Intel processors have 50-60 places in reservation stations. The results come out in a different order, but renaming ensures the correct values are written.&lt;/p&gt;
&lt;h3 id="what-happens-if-there-is-a-branch-jump"&gt;What happens if there is a branch (jump)?&lt;/h3&gt;
&lt;p&gt;In this version everything stops and waits for it to complete. Improvement of Tomasulo’s algorithm allows speculative execution of instructions.&lt;/p&gt;
&lt;p&gt;Speculation consists of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;branch prediction&lt;/li&gt;
&lt;li&gt;speculative execution using a reorder buffer&lt;/li&gt;
&lt;li&gt;once the branch is resolved:
&lt;ol type="1"&gt;
&lt;li&gt;correct prediction: ok&lt;/li&gt;
&lt;li&gt;wrong prediction: remove results of speculative execution from reorder buffer&lt;/li&gt;
&lt;/ol&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Implementation adds a reorder buffer (FIFO queue) to Tomasulo’s algorithm, works with it instead of directly with the programmer-accessible registers, and commits results in the order the instructions were issued. But even with this we still have &lt;span class="math"&gt;\(CPI \geq 1\)&lt;/span&gt;. Newer computers (starting with the Pentium Pro) use superscalarity (multiple issue) and are capable of reaching &lt;span class="math"&gt;\(CPI \approx 0.5\)&lt;/span&gt;.&lt;/p&gt;
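&lt;p&gt;The in-order commit discipline of the reorder buffer can be sketched as follows (Python; a deliberately minimal model, with entries assumed to be &lt;code&gt;(name, done, value)&lt;/code&gt; tuples):&lt;/p&gt;

```python
from collections import deque

def commit_in_order(rob):
    # commit finished entries strictly in program order;
    # stop at the first unfinished instruction, even if later ones are done
    committed = []
    while rob and rob[0][1]:
        name, _, value = rob.popleft()
        committed.append((name, value))
    return committed

rob = deque([("i1", True, 5), ("i2", True, 7), ("i3", False, None), ("i4", True, 9)])
print(commit_in_order(rob))  # [('i1', 5), ('i2', 7)]
```

&lt;p&gt;On a mispredicted branch the remaining (speculative) entries are simply discarded instead of committed.&lt;/p&gt;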
&lt;h2 id="ponovitev"&gt;Ponovitev&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-03-03&lt;/p&gt;
&lt;p&gt;Massive parallelism suits only a few problems: weather forecasting, chemical reactions, analysis of atomic explosions…&lt;/p&gt;
&lt;p&gt;Speeding up SISD:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tomasulo algorithm&lt;/li&gt;
&lt;li&gt;speculative instruction execution with a ROB (reorder buffer)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But still: &lt;span class="math"&gt;\(CPI \geq 1\)&lt;/span&gt;, &lt;span class="math"&gt;\(IPC = \frac{1}{CPI}\)&lt;/span&gt;&lt;/p&gt;
&lt;h3 id="večizvršitveni-računalniki"&gt;Večizvršitveni računalniki&lt;/h3&gt;
&lt;p&gt;Želimo doseči &lt;span class="math"&gt;\(IPC &amp;gt; 1\)&lt;/span&gt; brez spreminjanja programov.&lt;/p&gt;
&lt;p&gt;Poznamo pristopa:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;superskalarni procesorji&lt;/li&gt;
&lt;li&gt;VLIW/EPIC procesorji (dolgi ukazi)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Primer:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(n\)&lt;/span&gt;-izstavitveni procesor&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(x_i = f(x_j, x_n)\)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;medsebojna odvisnost operandov (RAW, WAR, WAW)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Required number of control comparisons:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\(1\)&lt;/span&gt;st instruction: &lt;span class="math"&gt;\(0\)&lt;/span&gt; comparisons&lt;br/&gt;&lt;span class="math"&gt;\(2\)&lt;/span&gt;nd instruction: &lt;span class="math"&gt;\(2\)&lt;/span&gt; comparisons&lt;br/&gt;&lt;span class="math"&gt;\(3\)&lt;/span&gt;rd instruction: &lt;span class="math"&gt;\(2*2\)&lt;/span&gt; comparisons&lt;br/&gt;&lt;span class="math"&gt;\(n\)&lt;/span&gt;th instruction: &lt;span class="math"&gt;\((n-1)*2\)&lt;/span&gt; comparisons&lt;/p&gt;
&lt;p&gt;Total: &lt;span class="math"&gt;\(2 * \sum_{i=1}^{n-1} i = 2 * \frac{(n-1)*n}{2} = n^2 - n\)&lt;/span&gt; comparisons&lt;/p&gt;
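&lt;p&gt;A quick numerical check of the comparison count (Python; note it agrees with the 30 comparators cited for a 6-issue processor below):&lt;/p&gt;

```python
def comparisons(n):
    # instruction i must compare its 2 source operands against
    # every earlier instruction's destination: (i-1)*2 comparisons
    return sum(2 * (i - 1) for i in range(1, n + 1))

for n in (2, 4, 6):
    assert comparisons(n) == n * n - n  # closed form n^2 - n
print(comparisons(6))  # 30
```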
&lt;ol type="a"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Superscalar processors&lt;/strong&gt;: hardware detection/resolution of interdependencies:&lt;/p&gt;
&lt;p&gt;processor, year, fetch - issue - execute, functional units&lt;br/&gt;IBM RS/6000, 1990, 2-2-2, 2&lt;br/&gt;Pentium Pro, 1995, 3-3-3, 5&lt;br/&gt;DEC Alpha, 1998, 4-4-11, 6&lt;br/&gt;Pentium 4, 2000, 3-3-4, 7&lt;br/&gt;Core 7, 2009, 6-6-4, 15, 30 comparators&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VLIW processors&lt;/strong&gt; (very long instruction word) (EPIC is the acronym used for the Intel Itanium): long instructions with a fixed number of ordinary instructions that can execute in parallel&lt;/p&gt;
&lt;p&gt;Here dependencies are detected/resolved in software: the compiler determines the schedule (a change of architecture requires recompiling). Chiefly successful in signal processors, awkward elsewhere.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="omejitve-paralelizma-na-nivoju-ukazov"&gt;Omejitve paralelizma na nivoju ukazov&lt;/h3&gt;
&lt;p&gt;Če je res, da lahko povečamo zmogljivost tako, da povečujemo število hkrati izstavljenih ukazov, se pojavi vprašanje koliko je sploh možno, če omejitev ne bi bilo?&lt;/p&gt;
&lt;p&gt;Zamislimo si idealen superskalarni procesor:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;neomejeno število registrov&lt;/li&gt;
&lt;li&gt;vse napovedi skokov so pravilne&lt;/li&gt;
&lt;li&gt;nobenih zgrešitev v predpomnilnikih&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Kakšen je &lt;span class="math"&gt;\(IPC\)&lt;/span&gt;? Na množici programov SPEC92:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;on the ideal processor: from 17.9 to 150 (average &lt;span class="math"&gt;\(\approx 80\)&lt;/span&gt;)&lt;/li&gt;
&lt;li&gt;with issue width &lt;span class="math"&gt;\(n = 50\)&lt;/span&gt;: drops from 80 to 45&lt;/li&gt;
&lt;li&gt;with a 3% branch-prediction error: from 45 to 23&lt;/li&gt;
&lt;li&gt;with 256 registers: from 23 to 16&lt;/li&gt;
&lt;li&gt;real processors achieve an IPC of 2 to 4&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Manufacturers are not increasing &lt;span class="math"&gt;\(IPC\)&lt;/span&gt; (which professional users would want) but are adding more cores instead. That is easier, and in marketing the speeds are simply multiplied; the manufacturers have won the battle over &lt;span class="math"&gt;\(IPC\)&lt;/span&gt;…&lt;/p&gt;
&lt;h3 id="paralelnost-na-nivoju-niti"&gt;Paralelnost na nivoju niti&lt;/h3&gt;
&lt;p&gt;Nit (thread) je zaporedje ukazov in pripadajočih operandov, ki se lahko izvršuje neodvisno od ostalih niti.&lt;/p&gt;
&lt;p&gt;Programer je tisti, ki mora identificirati niti in poskrbeti za njihovo sinhronizacijo. Npr. diskretna Furierjeva transformacija, množenje matrik, seštevanje…&lt;/p&gt;
&lt;h2 id="vektorski-računalniki-sisd"&gt;Vektorski računalniki (SISD)&lt;/h2&gt;
&lt;p&gt;Spadajo pod SISD, ne SIMD. Čeprav je en ukaz, se operacije izvedejo zaporedno.&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
a(i) = b(i) + c(i), i=1..N
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;They exploit data (operand) parallelism. Advantages:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;vector elements are almost always mutually independent (no data hazards)&lt;/li&gt;
&lt;li&gt;one vector instruction means &lt;span class="math"&gt;\(N\)&lt;/span&gt; operations –&amp;gt; fewer instructions (the Flynn bottleneck is removed)&lt;/li&gt;
&lt;li&gt;far fewer control hazards from branches (fewer loops)&lt;/li&gt;
&lt;li&gt;memory access is regular –&amp;gt; memory interleaving is effective&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The data-transfer bottleneck between the CPU and memory is addressed with a wider bus and with memory interleaving (&lt;span class="math"&gt;\(m\)&lt;/span&gt; modules, e.g. 16, 32, 64, 128, 256).&lt;/p&gt;
&lt;p&gt;The best-known series is Cray (Cray-1, Cray X-MP…), the fastest computers until the early ~1990s. These series had the following floating-point functional units (fixed point is not relevant), with latencies in clock cycles:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;+/-, 6&lt;/li&gt;
&lt;li&gt;*, 7&lt;/li&gt;
&lt;li&gt;1/x, 17&lt;/li&gt;
&lt;li&gt;load, 9 to 17&lt;/li&gt;
&lt;li&gt;store, 9 to 17&lt;/li&gt;
&lt;li&gt;gather/scatter, 6&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Structure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;every program has a scalar and a vector part&lt;/li&gt;
&lt;li&gt;vector registers (&lt;span class="math"&gt;\(N = 64, 128, 256\)&lt;/span&gt;) (drawn as a box with a double line at top and bottom)&lt;/li&gt;
&lt;li&gt;scalar registers (drawn as a box)&lt;/li&gt;
&lt;li&gt;vector functional unit (must take elements from a vector register in order, perform the operation on each, and write the results back to a vector register in order) (drawn as a cylinder with lines)&lt;/li&gt;
&lt;li&gt;vector load/store unit&lt;/li&gt;
&lt;li&gt;VL (vector length) register (an input to all functional units; length 1 is equivalent to scalar operation, only slower)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Example: DAXPY double-precision (64-bit); SAXPY single-precision (32-bit) with &lt;span class="math"&gt;\(N = 64\)&lt;/span&gt;:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
\vec{y} = a * \vec{x} + \vec{y}
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Program on a scalar computer:&lt;/p&gt;
&lt;pre class="sourceCode nasm"&gt;&lt;code class="sourceCode nasm"&gt;LD F0, a
ADDI R4, Rx, #&lt;span class="dv"&gt;64&lt;/span&gt;
LD F2, &lt;span class="dv"&gt;0&lt;/span&gt;(Rx)      :&lt;span class="kw"&gt;loop&lt;/span&gt;
MULTD F2, F0, F2
LD F4, &lt;span class="dv"&gt;0&lt;/span&gt;(Ry)
ADDD F4, F2, F4
SD &lt;span class="dv"&gt;0&lt;/span&gt;(Ry), F4
ADDI Rx, Rx, #&lt;span class="dv"&gt;1&lt;/span&gt;
ADDI Ry, Ry, #&lt;span class="dv"&gt;1&lt;/span&gt;
&lt;span class="kw"&gt;SUB&lt;/span&gt; R5, R4, Rx
BNZ R5, &lt;span class="kw"&gt;loop&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(&lt;span class="math"&gt;\(2+9*64 = 578\)&lt;/span&gt; instructions, &lt;span class="math"&gt;\(578\)&lt;/span&gt; clock cycles)&lt;/p&gt;
&lt;p&gt;Program on a vector computer:&lt;/p&gt;
&lt;pre class="sourceCode nasm"&gt;&lt;code class="sourceCode nasm"&gt;LD F0, a
LV V1, Rx         &lt;span class="co"&gt;; V1&amp;lt;-x&lt;/span&gt;
MULSU V2, F0, V1  &lt;span class="co"&gt;; V2&amp;lt;-F0*V1&lt;/span&gt;
LV V3, Ry         &lt;span class="co"&gt;; V3&amp;lt;-y&lt;/span&gt;
ADDV V4, V3, V2   &lt;span class="co"&gt;; V4&amp;lt;-V3+V2&lt;/span&gt;
SV Ry, V4&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(&lt;span class="math"&gt;\(6\)&lt;/span&gt; instructions, &lt;span class="math"&gt;\(1+12+7+6+12+63 = 101\)&lt;/span&gt; clock cycles)&lt;/p&gt;
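&lt;p&gt;The kernel and the instruction counts can be restated directly (Python; a sketch of the same DAXPY computation, not the machine code):&lt;/p&gt;

```python
def daxpy(a, x, y):
    # y <- a*x + y, elementwise (the DAXPY kernel from the example)
    return [a * xi + yi for xi, yi in zip(x, y)]

N = 64
scalar_instructions = 2 + 9 * N  # 2 setup instructions + a 9-instruction loop body per element
vector_instructions = 6          # the six vector instructions above
print(scalar_instructions, vector_instructions)  # 578 6

y = daxpy(2.0, [1.0] * N, [1.0] * N)
print(y[0])  # 3.0
```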
&lt;p&gt;&lt;strong&gt;Chaining&lt;/strong&gt;: The processor can start a computation as soon as one element of the operation has been loaded; there is no need to wait for the vector register to fill up completely. While the first element is being stored, a later one is still being read, yet this usually causes no problems.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Strip mining&lt;/strong&gt;: If there is too much data, the real vector is represented as a multiple of vectors of length &lt;span class="math"&gt;\(64\)&lt;/span&gt;.&lt;/p&gt;
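&lt;p&gt;Strip mining in a sketch (Python; MVL stands for the maximum vector length, an illustrative name):&lt;/p&gt;

```python
def strip_mine(x, mvl=64):
    # split a long vector into strips of at most MVL elements;
    # only the last strip may be shorter than MVL
    return [x[i:i + mvl] for i in range(0, len(x), mvl)]

strips = strip_mine(list(range(200)), mvl=64)
print([len(s) for s in strips])  # [64, 64, 64, 8]
```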
&lt;h3 id="problem-pogojno-izvršljivih-ukazov"&gt;Problem pogojno izvršljivih ukazov&lt;/h3&gt;
&lt;p&gt;Fortran koda:&lt;/p&gt;
&lt;pre class="sourceCode fortran"&gt;&lt;code class="sourceCode fortran"&gt;    &lt;span class="kw"&gt;do&lt;/span&gt; &lt;span class="dv"&gt;100&lt;/span&gt; i&lt;span class="kw"&gt;=&lt;/span&gt;&lt;span class="dv"&gt;1&lt;/span&gt;,&lt;span class="dv"&gt;64&lt;/span&gt;
      &lt;span class="kw"&gt;if&lt;/span&gt;(A(i) ne &lt;span class="dv"&gt;0&lt;/span&gt;) &lt;span class="kw"&gt;then&lt;/span&gt; A(i) &lt;span class="kw"&gt;=&lt;/span&gt; A(i) &lt;span class="kw"&gt;-&lt;/span&gt; B(i)
&lt;span class="dv"&gt;100&lt;/span&gt; &lt;span class="kw"&gt;continue&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Execution on a vector computer:&lt;/p&gt;
&lt;pre class="sourceCode nasm"&gt;&lt;code class="sourceCode nasm"&gt;LV V1, Ra        &lt;span class="co"&gt;; V1&amp;lt;-A&lt;/span&gt;
LV V2, Rb        &lt;span class="co"&gt;; V2&amp;lt;-B&lt;/span&gt;
LD F0, #&lt;span class="dv"&gt;0&lt;/span&gt;        &lt;span class="co"&gt;; F0&amp;lt;-0&lt;/span&gt;
SNESV F0, V1     &lt;span class="co"&gt;; VM&amp;lt;-1 if V1(i)!=0 else 0; scalar not equal vector&lt;/span&gt;
SUBV V1, V1, V2
CUM              &lt;span class="co"&gt;; VM&amp;lt;-1 in all positions&lt;/span&gt;
SV Ra, V1        &lt;span class="co"&gt;; A&amp;lt;-V1&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;VM register (64-bit mask), &lt;span class="math"&gt;\(N=64\)&lt;/span&gt;, bits &lt;span class="math"&gt;\(\{N-1,N-2,...,0\}\)&lt;/span&gt;&lt;/p&gt;
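&lt;p&gt;The effect of the mask register can be sketched as follows (Python; a minimal model of the masked update from the Fortran loop above):&lt;/p&gt;

```python
def masked_sub(A, B):
    # vector-mask version of: if A(i) != 0 then A(i) = A(i) - B(i)
    VM = [ai != 0 for ai in A]  # SNESV: set mask where A(i) != 0
    return [ai - bi if m else ai for ai, bi, m in zip(A, B, VM)]

print(masked_sub([0, 5, 0, 7], [1, 2, 3, 4]))  # [0, 3, 0, 3]
```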
&lt;p&gt;The sparse-matrix problem (&lt;span class="math"&gt;\(\vec{K}\)&lt;/span&gt;, &lt;span class="math"&gt;\(\vec{M}\)&lt;/span&gt; are index vectors). Fortran code:&lt;/p&gt;
&lt;pre class="sourceCode fortran"&gt;&lt;code class="sourceCode fortran"&gt;    &lt;span class="kw"&gt;do&lt;/span&gt; &lt;span class="dv"&gt;100&lt;/span&gt; i&lt;span class="kw"&gt;=&lt;/span&gt;&lt;span class="dv"&gt;1&lt;/span&gt;,&lt;span class="dv"&gt;64&lt;/span&gt;
      A(K(i)) &lt;span class="kw"&gt;=&lt;/span&gt; A(K(i)) &lt;span class="kw"&gt;+&lt;/span&gt; C(M(i))
&lt;span class="dv"&gt;100&lt;/span&gt; &lt;span class="kw"&gt;continue&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Execution on a vector computer:&lt;/p&gt;
&lt;pre class="sourceCode nasm"&gt;&lt;code class="sourceCode nasm"&gt;LV V1, Rk        &lt;span class="co"&gt;; V1&amp;lt;-K&lt;/span&gt;
LVI V2, (Ra+V1)  &lt;span class="co"&gt;; V2&amp;lt;-A(K), gather&lt;/span&gt;
LV V3, Rm
LVI V4, (Rc+V3)  &lt;span class="co"&gt;; V4&amp;lt;-C(M)&lt;/span&gt;
ADDV V5, V2, V4
SVI (Ra+V1), V5  &lt;span class="co"&gt;; mem&amp;lt;-A(K), scatter&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
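&lt;p&gt;Gather and scatter in a sketch (Python; the index vectors play the role of &lt;code&gt;LVI&lt;/code&gt;/&lt;code&gt;SVI&lt;/code&gt; above):&lt;/p&gt;

```python
def gather(A, K):
    return [A[k] for k in K]  # LVI: indexed load A(K)

def scatter(A, K, V):
    for k, v in zip(K, V):    # SVI: indexed store A(K) <- V
        A[k] = v

A = [10, 0, 20, 0, 30]
C = [1, 2, 3]
K = [0, 2, 4]  # indices of the nonzero elements of A
M = [0, 1, 2]
V = [a + c for a, c in zip(gather(A, K), gather(C, M))]  # A(K) + C(M)
scatter(A, K, V)
print(A)  # [11, 0, 22, 0, 33]
```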
&lt;h3 id="redukcija-vektorjev"&gt;Redukcija vektorjev&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-03-10&lt;/p&gt;
&lt;pre class="sourceCode fortran"&gt;&lt;code class="sourceCode fortran"&gt;    dot &lt;span class="kw"&gt;=&lt;/span&gt; &lt;span class="dv"&gt;0&lt;/span&gt;
    &lt;span class="kw"&gt;do&lt;/span&gt; &lt;span class="dv"&gt;10&lt;/span&gt; i&lt;span class="kw"&gt;=&lt;/span&gt;&lt;span class="dv"&gt;1&lt;/span&gt;,&lt;span class="dv"&gt;64&lt;/span&gt;
&lt;span class="dv"&gt;10&lt;/span&gt;  dot &lt;span class="kw"&gt;=&lt;/span&gt; dot &lt;span class="kw"&gt;+&lt;/span&gt; A(i) &lt;span class="kw"&gt;*&lt;/span&gt; B(i)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The problem cannot be vectorized unless we use the vector-reduction trick:&lt;/p&gt;
&lt;pre class="sourceCode fortran"&gt;&lt;code class="sourceCode fortran"&gt;    &lt;span class="kw"&gt;do&lt;/span&gt; &lt;span class="dv"&gt;10&lt;/span&gt; i&lt;span class="kw"&gt;=&lt;/span&gt;&lt;span class="dv"&gt;1&lt;/span&gt;,&lt;span class="dv"&gt;64&lt;/span&gt;
&lt;span class="dv"&gt;10&lt;/span&gt;  Y(i) &lt;span class="kw"&gt;=&lt;/span&gt; A(i) &lt;span class="kw"&gt;*&lt;/span&gt; B(i)  ; vektorski ukaz
    dot &lt;span class="kw"&gt;=&lt;/span&gt; Y(&lt;span class="dv"&gt;1&lt;/span&gt;)          ; skalarni ukaz
    &lt;span class="kw"&gt;do&lt;/span&gt; &lt;span class="dv"&gt;20&lt;/span&gt; i&lt;span class="kw"&gt;=&lt;/span&gt;&lt;span class="dv"&gt;2&lt;/span&gt;,&lt;span class="dv"&gt;64&lt;/span&gt;
&lt;span class="dv"&gt;20&lt;/span&gt;  dot &lt;span class="kw"&gt;=&lt;/span&gt; dot &lt;span class="kw"&gt;+&lt;/span&gt; Y(i)    ; vektorski ukaz&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Parameters for vector-computer performance (for comparison):&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;span class="math"&gt;\(N_\infty\)&lt;/span&gt; - GFLOPS, the theoretical performance, which assumes that all vectors are infinitely long and all functional units can work on them&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(N_{1/2}\)&lt;/span&gt; - the vector length at which &lt;span class="math"&gt;\(N_\infty / 2\)&lt;/span&gt; is reached&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(N_v\)&lt;/span&gt; - the vector length at which vector computation becomes faster than scalar&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;E.g. the Cray family, where the same functional unit handles scalar and vector instructions:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;| $N_v = 2$, since with a vector instruction the length must first be transferred to the functional unit&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Why aren’t PCs vector computers?&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;| Nowadays there is SSE, which is vector-like.
| Vector machines exploit operand parallelism.
| Why do all PCs have a floating-point unit even though only 10% of programs use it? It was once a coprocessor (&amp;lt;486); integrating it became cheaper.
| Reason: CPU cost (registers are expensive) and memory access (wider transfer paths; DRAM suffices for scalar work).&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;GPUs have become popular for GPGPU but are hard to program; an alternative is the Xeon Phi with its 61 processors, which are software-compatible. We shall see which prevails.&lt;/p&gt;
&lt;p&gt;The Cray family:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;| 1976  Cray-1
| ...
| 1996  Cray T90
| 2006  NEC
| MIMD  vector&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Gordon Bell Prize (awarded to a computer solving a real problem, usually at less than half its theoretical speed) - real speeds:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;| 1988  Cray Y-MP                    1 GFLOPS  last vector machine
| 1990  CM-2                        14 GFLOPS  last SIMD machine
| 2000  Grape-6                   1349 GFLOPS  MIMD
| 2010  Cray XT5 Jaguar           2330 TFLOPS  MIMD
| 2013  IBM Sequoia (BlueGene/Q)  14.4 PFLOPS  MIMD
| one core of a modern PC: theoretically 10&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="simd-računalniki"&gt;SIMD računalniki&lt;/h2&gt;
&lt;p&gt;Operandna/podatkovna paralelnost, ki jo je veliko lažje izkoriščati kot ukazno, saj programer že z definiranjem podatkovnih tipov omogoči izkoriščanje. Pri SIMD se ukazi v resnici izvedejo naenkrat, ne zaporedno kot pri vektorskih.&lt;/p&gt;
&lt;p&gt;Vsi procesorji (imamo maskiranje) izvedejo isti ukaz na &lt;span class="math"&gt;\(n\)&lt;/span&gt; operandih naenkrat. Imamo gosteči računalnik, ki se ukvarja z vhodom in izhodom, saj nima smisla, da se SIMD specializiran za računanje ukvarja s tem.&lt;/p&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/arhitektura-simd-racunalnikov.jpg"&gt;&lt;img alt="Arhitektura SIMD računalnikov" height="452" src="http://gw.tnode.com/student/img/arhitektura-simd-racunalnikov.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Arhitektura SIMD računalnikov&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;PE_i - processing element/processor&lt;br/&gt;PEM_i - processing-element memory, which returns values back to memory or to the host computer (for I/O)&lt;br/&gt;interconnection network for communication among the PEs&lt;/p&gt;
&lt;p&gt;Two types:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;distributed memory: an interconnection network for direct communication between PEs (special instructions, shorter access time), NUMA - non-uniform memory access (access time depends on the address)&lt;/li&gt;
&lt;li&gt;shared memory: the interconnection network sits between PE_i and M_i and behaves just like memory; no special interprocessor-communication instructions are needed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Most machines are of the first type, since access over a dedicated interconnection network is faster (analogous to the memory hierarchy).&lt;/p&gt;
&lt;h3 id="povezovalne-mreže"&gt;Interconnection networks&lt;/h3&gt;
&lt;p&gt;Some problems involve little interprocessor communication (e.g. computing the DFT), but unfortunately not all are like that. The problem is how to build a communication network in which every processor can communicate with every other quickly.&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
R_i -&amp;gt; R_{f(i)}, i=0,1,...,n-1
\]&lt;/span&gt;&lt;/p&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/povezovalne-mreze-stopnja-1-4.jpg"&gt;&lt;img alt="Povezovalne mreže stopnja 1-4" height="452" src="http://gw.tnode.com/student/img/povezovalne-mreze-stopnja-1-4.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Povezovalne mreže stopnja 1-4&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;em&gt;Degree 1&lt;/em&gt;: bus; the problem is that only one unit can communicate with one other at a time&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Degree 2&lt;/em&gt;: path or cycle, each node connected to its two neighbors; the problem is conflicting accesses when traffic must pass through&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Degree 3&lt;/em&gt;: binary tree; communication between the outermost leaves is a problem; an improvement is the fat tree, which has multiple links in the higher branches/trunk, reducing the probability of conflicts&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Degree 4&lt;/em&gt;: mesh, NEWS (north-east-west-south), 2D torus (a car tire, if the outer edges are also connected); every node is connected to 4 neighbors, and the number of nodes a message must travel through grows linearly with the distance between processors&lt;/p&gt;
&lt;p&gt;Degree 4 became popular because it matches the nature of a large class of problems (e.g. weather, water flow), where space is divided into parts and no event acts at a distance without passing through the intermediate parts. There are, however, also problems of a mathematical nature without these constraints.&lt;/p&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/povezovalne-mreze-stopnja-5.jpg"&gt;&lt;img alt="Interconnection networks, degree 5" height="452" src="http://gw.tnode.com/student/img/povezovalne-mreze-stopnja-5.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Interconnection networks, degree 5&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;em&gt;Degree 5&lt;/em&gt;: 3D torus, like several NEWS planes stacked one above another; every node is connected to 6 neighbors&lt;/p&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/povezovalne-mreze-stopnja-n.jpg"&gt;&lt;img alt="Interconnection networks, degree n" height="452" src="http://gw.tnode.com/student/img/povezovalne-mreze-stopnja-n.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Interconnection networks, degree n&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;em&gt;Spremenljiva stopnja &lt;span class="math"&gt;\(n\)&lt;/span&gt;&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;hypercube (&lt;span class="math"&gt;\(n\)&lt;/span&gt;-cube): the number of nodes a message must pass through is &lt;span class="math"&gt;\(\log_2 n\)&lt;/span&gt;; not many such machines were built (e.g. Connection Machine 2)&lt;/li&gt;
&lt;/ul&gt;
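&lt;p&gt;The &lt;span class="math"&gt;\(\log_2 n\)&lt;/span&gt; bound can be illustrated with dimension-order routing: node IDs differ in some bit positions, and each hop flips one differing bit. The following is an illustrative sketch (not part of the lecture); &lt;code&gt;hypercube_route&lt;/code&gt; is a hypothetical helper name.&lt;/p&gt;

```python
def hypercube_route(src: int, dst: int):
    """Dimension-order routing in a hypercube: flip one differing ID bit per hop."""
    path, node = [src], src
    diff = src ^ dst          # bits where the two node IDs differ
    bit = 1
    while diff:
        if diff % 2:          # this dimension differs: hop to that neighbor
            node ^= bit
            path.append(node)
        diff //= 2
        bit *= 2
    return path

# In a 12-cube (2**12 nodes, as in the Connection Machine 2) any message
# needs at most 12 hops -- the Hamming distance between the two IDs.
print(hypercube_route(0, 5))                   # [0, 1, 5]: two hops for IDs 000 and 101
print(len(hypercube_route(0, 2**12 - 1)) - 1)  # 12
```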
&lt;p&gt;Those were static networks; there are also dynamic networks, in which switches can set up connections on demand. With a butterfly network we can compose an arbitrary network, provided we have a switch that can be set both ways.&lt;/p&gt;
&lt;h3 id="zgled-illiac-4"&gt;Zgled: Illiac 4&lt;/h3&gt;
&lt;p&gt;Illiac 4 (1966-1972) izdelovali na Uni. Illinois, podjetje Westinghouse. Nekateri obtožujejo ta računalnik, da je upočasnil razvoj paralelnih računalnikov.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the goal was to reach 1000 MFLOPS (32-bit)&lt;/li&gt;
&lt;li&gt;4 quadrants with 64 PEs per quadrant&lt;/li&gt;
&lt;li&gt;only 1 quadrant was actually realized&lt;/li&gt;
&lt;li&gt;a control unit without memory, 64 PEs each with its own PEM, host computer B6500&lt;/li&gt;
&lt;li&gt;the ferrite-core memory was a problem, and the costs were enormous (4× over budget)&lt;/li&gt;
&lt;/ul&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/arhitektura-illiac-4.jpg"&gt;&lt;img alt="Arhitektura Illiac 4" height="452" src="http://gw.tnode.com/student/img/arhitektura-illiac-4.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Arhitektura Illiac 4&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Operation timings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;+/-: 350 ns&lt;/li&gt;
&lt;li&gt;*: 450 ns&lt;/li&gt;
&lt;li&gt;/: 2700 ns&lt;/li&gt;
&lt;li&gt;load/store: 350/300 ns&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Consequently: &amp;lt; 3 MFLOPS per PE × 64 PEs&lt;br/&gt;theoretical = 180 MFLOPS&lt;br/&gt;actual = 15 MFLOPS&lt;/p&gt;
&lt;p&gt;Examples of what happens when people overlook something…&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;program        compute time   total time
I4TRES         10800 s        22100 s
2D-TRANSONIC   1110 s         2000 s
SAR            28 s           52 s&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The problem was that I/O (transfers to and from memory) was too slow; the machine computed faster than it could be fed.&lt;/p&gt;
&lt;h3 id="zgled-connection-machine-2"&gt;Zgled: Connection Machine 2&lt;/h3&gt;
&lt;p&gt;Connection Machine 2 (1987), naredili več kot 50, podjetje TCM (Thinking Machines Corporation). Ideja je bila naredit računalnik, ki je samo “number cruncher”, a uporaben za reševanje različnih problemov.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(2^{16}\)&lt;/span&gt; 1-bitnih procesorjev&lt;/li&gt;
&lt;li&gt;glavni pomnilnik imenovan Nexus&lt;/li&gt;
&lt;li&gt;4 ali več gostečih (front-end) računalnikov, ki pripravlja stvari preden jih da v delu&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(4\)&lt;/span&gt; kvadrantov, vsak s &lt;span class="math"&gt;\(2^{14} = 16384\)&lt;/span&gt; procesorji, povezani z Nexus&lt;/li&gt;
&lt;li&gt;poseben V/I sistem z 1 do 8 “data vaults” (narejen iz diskov), da lahko podatke dovolj hitro prebere&lt;/li&gt;
&lt;/ul&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span class="math"&gt;\(2^4 = 16\)&lt;/span&gt; 1-bit processors per chip = 1 node&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(2^{16} / 2^4 = 2^{12}\)&lt;/span&gt; nodes, forming a 12-hypercube&lt;/li&gt;
&lt;li&gt;one 32-bit floating-point unit per &lt;span class="math"&gt;\(2 \times 16\)&lt;/span&gt; processors&lt;/li&gt;
&lt;li&gt;&lt;span class="math"&gt;\(2^{11}\)&lt;/span&gt; 32-bit floating-point units (in the end used mostly for “number crunching”)&lt;/li&gt;
&lt;li&gt;theoretical performance = 20 GFLOPS&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All parallel computers are meant for parallel problems; they are not good at everything.&lt;/p&gt;
&lt;h2 id="prevlada-mimd"&gt;Prevlada MIMD&lt;/h2&gt;
&lt;p&gt;Po leti ~1990 so SIMD računalniki izginili, a zakaj?&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;Razvoj posebnih elementov za dano arhitekturo, saj je vsak SIMD je narejen drugače.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Razširljivost&lt;/em&gt; (scalability), kjer uporabnik želi velikost svojega računalnika prilagajati problemu, ki ga rešuje.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Razvoj tehnologije&lt;/em&gt;, saj ko so splošno namenski procesorji (off-the-shelf componenets) postali tako zmogljivi kot posebej narejeni in se ni več izplačalo delati SIMD.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Računalniku nič ne škodi, če deluje na MIMD način, le bolj splošen je. Tudi pri MIMD srečamo shared in distributed memory.&lt;/p&gt;
&lt;p&gt;Problemi, ki se rešujejo običajno še vedno izkoriščajo operandno paralelnost, torej kot SIMD, a zmorejo tudi več.&lt;/p&gt;
&lt;h3 id="pregled"&gt;Pregled&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Date&lt;/strong&gt;: 2014-03-17&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;SISD&lt;/li&gt;
&lt;li&gt;Superscalar&lt;/li&gt;
&lt;li&gt;Tomasulo algorithm&lt;/li&gt;
&lt;li&gt;Vector&lt;/li&gt;
&lt;li&gt;SIMD&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="mimd-računalniki"&gt;MIMD računalniki&lt;/h2&gt;
&lt;p&gt;Zakaj so izginili SIMD?&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;em&gt;Technological progress&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;after ~1990, “standard” (off-the-shelf) processors became powerful enough.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Scalability&lt;/em&gt; of SIMD is poor
&lt;ul&gt;
&lt;li&gt;performance is increased by adding processors&lt;/li&gt;
&lt;li&gt;latency increases&lt;/li&gt;
&lt;li&gt;cost increases&lt;/li&gt;
&lt;li&gt;physical size grows&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/arhitektura-mimd-racunalnikov.jpg"&gt;&lt;img alt="Arhitektura MIMD računalnikov" height="452" src="http://gw.tnode.com/student/img/arhitektura-mimd-racunalnikov.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Arhitektura MIMD računalnikov&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The two basic kinds of MIMD:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;em&gt;Shared address space&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;tightly coupled system&lt;/li&gt;
&lt;li&gt;shared memory&lt;/li&gt;
&lt;li&gt;UMA (uniform memory access)&lt;/li&gt;
&lt;li&gt;each processor &lt;span class="math"&gt;\(P_i\)&lt;/span&gt; has its own cache &lt;span class="math"&gt;\(C_i\)&lt;/span&gt;; the caches connect to a common interconnection network, which in turn connects to the memories &lt;span class="math"&gt;\(M_i\)&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Multiple address spaces&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;loosely coupled system&lt;/li&gt;
&lt;li&gt;message passing&lt;/li&gt;
&lt;li&gt;distributed memory&lt;/li&gt;
&lt;li&gt;NUMA (non-uniform memory access)&lt;/li&gt;
&lt;li&gt;each processor &lt;span class="math"&gt;\(P_i\)&lt;/span&gt; has its own cache &lt;span class="math"&gt;\(C_i\)&lt;/span&gt; and memory &lt;span class="math"&gt;\(M_i\)&lt;/span&gt;; there are multiple address spaces, and only then are the nodes connected by the interconnection network&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/ol&gt;
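&lt;p&gt;The two kinds can be caricatured in software: message passing exchanges data only through explicit send/receive, while shared memory has all workers update a single location under synchronization. A toy sketch using threads (an analogy only, not real MIMD hardware):&lt;/p&gt;

```python
import queue
import threading

# Multiple address spaces (message passing): nodes exchange data
# only through explicit messages.
def mp_worker(inbox, outbox):
    x = inbox.get()       # receive a message
    outbox.put(x * 2)     # send the result back

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=mp_worker, args=(inbox, outbox))
t.start()
inbox.put(21)
result_mp = outbox.get()
t.join()

# Shared address space: all workers update one memory location, which must
# be protected (the hardware analogue is keeping the caches C_i coherent).
counter = {"v": 0}
lock = threading.Lock()
def sm_worker():
    with lock:
        counter["v"] += 1

workers = [threading.Thread(target=sm_worker) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(result_mp, counter["v"])  # 42 4
```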
&lt;h3 id="zgled-tianhe-2"&gt;Zgled: Tianhe-2&lt;/h3&gt;
&lt;p&gt;Lestvica TOP500 (teoretična zmogljivost):&lt;/p&gt;
&lt;p&gt;2010 Tianhe-1A 4,7 PFLOPS 4,1 MW debelo drevo&lt;br/&gt;2011 K computer 11,3 PFLOPS 12,6 MW “6D” torus (3D + dodatki)&lt;br/&gt;2012 Titan-Cray XK7 27,1 PFLOPS 8,3 MW 3D torus&lt;br/&gt;2013 Tianhe-2 54,9 PFLOPS 17,8 MW debelo drevo (+7 MW hlajenje)&lt;/p&gt;
&lt;p&gt;Structure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;board:
&lt;ul&gt;
&lt;li&gt;2 modules:
&lt;ul&gt;
&lt;li&gt;CPM: 2× Ivy Bridge Xeon + 1× Xeon Phi&lt;/li&gt;
&lt;li&gt;APU (accelerated processing unit): 5× Xeon Phi&lt;/li&gt;
&lt;li&gt;network: 2× Gigabit LAN + TH-Express 2&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;2 nodes/board: 64 GB DRAM (Ivy Bridge) + 24 GB DRAM (Xeon Phi) = 88 GB/board&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;frame: 32 boards&lt;/li&gt;
&lt;li&gt;cabinet (rack): 4 frames = 128 boards&lt;/li&gt;
&lt;li&gt;computer: 125 cabinets (+ 13 for the network + 24 for storage (disks))&lt;/li&gt;
&lt;li&gt;water cooling&lt;/li&gt;
&lt;li&gt;price: ~500 M$&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Heterogeneous processors:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;32000× Ivy Bridge Xeon E5-2692  2.2 GHz  12 cores  211 GFLOPS/chip   115 W
48000× Xeon Phi 31S1P           1.1 GHz  57 cores  1003 GFLOPS/chip  270 W&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Total cores: &lt;span class="math"&gt;\(32000 \cdot 12 + 48000 \cdot 57 = 3120000\)&lt;/span&gt; cores (processors)&lt;br/&gt;Theoretical speed: &lt;span class="math"&gt;\(32000 \cdot 0.211 + 48000 \cdot 1.003 = 54896\)&lt;/span&gt; TFLOPS ≈ 54.9 PFLOPS&lt;/p&gt;
&lt;p&gt;Disks: 12.4 PB, H2FS hierarchy&lt;/p&gt;
&lt;p&gt;Network: fat tree&lt;/p&gt;
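&lt;p&gt;The totals above are easy to sanity-check (a simple sketch; the per-chip figures are taken from the table, with 211 GFLOPS = 0.211 TFLOPS and 1003 GFLOPS = 1.003 TFLOPS):&lt;/p&gt;

```python
# Sanity check of the Tianhe-2 core and FLOPS totals.
xeon_chips, xeon_cores = 32000, 12
phi_chips, phi_cores = 48000, 57

total_cores = xeon_chips * xeon_cores + phi_chips * phi_cores
total_tflops = xeon_chips * 0.211 + phi_chips * 1.003

print(total_cores)            # 3120000 cores
print(total_tflops / 1000)    # ~54.9 PFLOPS
```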
&lt;h3 id="primerjava-gridcloud-computing"&gt;Primerjava: Grid/cloud computing&lt;/h3&gt;
&lt;p&gt;Folding@home 2011 6 PFLOPS&lt;br/&gt;Folding@home 2013 12 PFLOPS&lt;br/&gt;SETI@home 2013 15,4 PFLOPS&lt;br/&gt;BOINC 2013 9,2 PFLOPS&lt;br/&gt;Bitcoin 2014 414,5 EFLOPS&lt;/p&gt;
&lt;p&gt;Majhna količina/potreba medprocesorske komunikacije, a takšnih primerov je zelo malo.&lt;/p&gt;
&lt;p&gt;Cluster (gruča) je skupek istih računalniko povezanih skupaj.&lt;/p&gt;
&lt;h3 id="primerjava-gpu-computing"&gt;Primerjava: GPU computing&lt;/h3&gt;
&lt;p&gt;Računanje na grafičnih procesorjih (MIMD). Prvotno specializirani za operacije za grafiko, dandanes imamo standard CUDA za lažjo uporabo.&lt;/p&gt;
&lt;p&gt;Intelov odgovor na to je koprocesor npr. Xeon Phi, ki je združljiv in uporablja skoraj enake ukaze kot prej. Zmoljivost se drastično poveča: 16 FLOPS/cycle/core = 1003 GLOPS/core, 2x večja poraba, a hitrost 5x večja.&lt;/p&gt;
&lt;p&gt;Kaj bo prevladalo? Koprocesorji ali GPU? Bomo videli.&lt;/p&gt;
&lt;h3 id="energetska-učinkovitost"&gt;Energetska učinkovitost&lt;/h3&gt;
&lt;p&gt;Tianhe-2 ~ &lt;span class="math"&gt;\(3 GFLOPS/W\)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Lestvica Green500:&lt;/p&gt;
&lt;p&gt;nov. 2013 4,503 GFLOPS/W 28 kW&lt;br/&gt;           3,632 GFLOPS/W 52 kW&lt;br/&gt;           3,518 GFLOPS/W 78 kW&lt;/p&gt;
&lt;p&gt;Kako narediti ExaFlops (&lt;span class="math"&gt;\(1000 PFLOPS\)&lt;/span&gt;) računalnik?&lt;/p&gt;
&lt;p&gt;Problem za segrevanje je število preklopov, ker pri zniževanju napetosti najprej hitrost pade, pod &lt;span class="math"&gt;\(0.7 V\)&lt;/span&gt; tranzistorji nehajo delati. Sodobni procesorji prilagajajo napetost/hitrost glede na obremenitev, temperaturo, in podobne faktorje.&lt;/p&gt;
&lt;p&gt;V laboratorijih že imajo tranzistorje na drugačnih osnovah (&lt;span class="math"&gt;\(0.3 V\)&lt;/span&gt;), a daleč od produkcije.&lt;/p&gt;
&lt;h2 id="podatkovno-pretokovni-računalniki-dataflow"&gt;Podatkovno pretokovni računalniki (dataflow)&lt;/h2&gt;
&lt;p&gt;Obravnavali smo von Neumannove računalnike na katerih so od leta 1945 zasnovani vsi računalniki. Von Neumannov model je ukazno pretokovni (control flow) (strogo zaporedje: prevzem ukaza, izvršitev ukaza). Operand je točno določen z naslovom (oz. virtualnim naslovom).&lt;/p&gt;
&lt;p&gt;Podatkovno pretokovni uporablja popolnoma drugačen pristop (ideja ~1950-ih, več pogovarjali v obdobju ~1960-1980). Problem lahko predstavimo kot graf operacij, npr:&lt;/p&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
a = (b + 1) * (b - c)
\]&lt;/span&gt;&lt;/p&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/izvajanje-dataflow-racunalnikov.jpg"&gt;&lt;img alt="Izvajanje dataflow računalnikov" height="452" src="http://gw.tnode.com/student/img/izvajanje-dataflow-racunalnikov.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Izvajanje dataflow računalnikov&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;ol type="1"&gt;
&lt;li&gt;operation packet&lt;/li&gt;
&lt;li&gt;data token&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;span class="math"&gt;\[
\begin{aligned}
&amp;\gamma: +\ (b)\ (1)\ \beta/1 \\
&amp;\alpha: -\ (b)\ (c)\ \beta/2 \\
&amp;\beta: *\ (\beta/1)\ (\beta/2)\ \delta/1 = a
\end{aligned}
\]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Data tokens go to their designated slots in operation packets; the results then travel on to their designated slots in other operation packets.&lt;/p&gt;
&lt;p&gt;Modes of operation:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;em&gt;Data driven&lt;/em&gt;: an operation executes as soon as all its input data are present&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Demand driven&lt;/em&gt;: as in 1., plus some operation packet must need the result&lt;/li&gt;
&lt;/ol&gt;
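&lt;p&gt;The data-driven rule can be mimicked with a toy interpreter for the graph of &lt;span class="math"&gt;\(a = (b + 1) * (b - c)\)&lt;/span&gt;: each operation packet fires as soon as all its input tokens have arrived. This is an illustrative sketch, not any specific machine; the packet names mirror the notation above.&lt;/p&gt;

```python
import operator

# Operation packets for a = (b + 1) * (b - c); each lists the tokens it
# waits for and the destination slot of its result token.
packets = {
    "gamma": {"op": operator.add, "need": ["b", "one"], "dest": "beta_1"},
    "alpha": {"op": operator.sub, "need": ["b", "c"], "dest": "beta_2"},
    "beta":  {"op": operator.mul, "need": ["beta_1", "beta_2"], "dest": "a"},
}

def run(tokens):
    fired = set()
    while fired != set(packets):
        for name, p in packets.items():
            # Data-driven firing rule: execute when all input tokens are present.
            if name not in fired and all(t in tokens for t in p["need"]):
                tokens[p["dest"]] = p["op"](*(tokens[t] for t in p["need"]))
                fired.add(name)
    return tokens

result = run({"b": 4, "c": 1, "one": 1})
print(result["a"])  # (4 + 1) * (4 - 1) = 15
```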
&lt;p&gt;Operations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;function &lt;span class="math"&gt;\(f\)&lt;/span&gt; (circle; several inputs, one output)&lt;/li&gt;
&lt;li&gt;decision (diamond; several inputs, one output)&lt;/li&gt;
&lt;li&gt;gate &lt;span class="math"&gt;\(T\)&lt;/span&gt;: lets the token through when true (circle with an arrow; one input, one output)&lt;/li&gt;
&lt;li&gt;switch &lt;span class="math"&gt;\(T/F\)&lt;/span&gt;: an if statement selecting between two inputs (circle with an arrow; two inputs, one output)&lt;/li&gt;
&lt;li&gt;switch &lt;span class="math"&gt;\(T/F\)&lt;/span&gt;: an if statement selecting an output (circle with an arrow; one input, two outputs)&lt;/li&gt;
&lt;/ul&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/operacije-dataflow-racunalnikov.jpg"&gt;&lt;img alt="Operacije dataflow računalnikov" height="452" src="http://gw.tnode.com/student/img/operacije-dataflow-racunalnikov.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Operacije dataflow računalnikov&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Types:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;static dataflow computers (1974, J. Dennis):
&lt;ul&gt;
&lt;li&gt;at most 1 token may be present on any arc of the graph&lt;/li&gt;
&lt;li&gt;an operation executes if its input operands are present and its output arc is empty&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;dynamic dataflow computers (1982 Manchester; 1983 Arvind, MIT):
&lt;ul&gt;
&lt;li&gt;several tokens may be present on an arc of the graph&lt;/li&gt;
&lt;li&gt;tokens must be numbered (colored)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Address-related errors cannot occur, since the concept of an address does not exist, so it cannot be corrupted by mistake (e.g. a stack overflow). The graphs impose no restrictions on ordering.&lt;/p&gt;
&lt;p&gt;How is such a computer built? (it resembles a packet-switched network)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a host computer (for easier implementation)&lt;/li&gt;
&lt;li&gt;the host computer controls the I/O switch&lt;/li&gt;
&lt;li&gt;each datum is assigned a data token according to its kind&lt;/li&gt;
&lt;li&gt;the matching unit holds the program, i.e. the operation packets, and waits for the corresponding tokens&lt;/li&gt;
&lt;li&gt;operation packets with all their tokens then travel to memory, i.e. a queue&lt;/li&gt;
&lt;li&gt;from there they go to a pool of processors, out of which come result tokens, which circulate again as needed&lt;/li&gt;
&lt;/ul&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/student/img/arhitektura-dataflow-racunalnikov.jpg"&gt;&lt;img alt="Arhitektura dataflow računalnikov" height="452" src="http://gw.tnode.com/student/img/arhitektura-dataflow-racunalnikov.jpg" width="600"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Arhitektura dataflow računalnikov&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Problems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;performance of the matching unit: the number of comparators in an even slightly larger associative cache grows very quickly&lt;/li&gt;
&lt;li&gt;the packet network has the same problems as any network: bottlenecks can appear and processors may have to wait&lt;/li&gt;
&lt;li&gt;large memory “consumption”: each variable can have a large number of tokens at a given moment (e.g. matrix operations, where many values can arise from a single one)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are no such computers on the market, except for special purposes. Although the idea is interesting and some problems simply do not arise (strict ordering, segmentation faults), they did not catch on, since they are not more powerful.&lt;/p&gt;
&lt;p&gt;Others:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;analog computers (wired, ~1% accuracy), hybrid computers (easier to program)&lt;/li&gt;
&lt;li&gt;neural networks (prepared on a PC, possibly with specialized circuits), suitable for certain tasks (e.g. face recognition)&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="student"></category></entry><entry><title>Research tool nets-nodegroups</title><link href="http://gw.tnode.com/nets-nodegroups/" rel="alternate"></link><updated>2015-07-08T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2014-02-03:nets-nodegroups/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Node group structures" height="200" src="http://gw.tnode.com/nets-nodegroups/img/node-group-structures.jpg" width="336"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;Research tool &lt;a href="http://gw.tnode.com/nets-nodegroups/"&gt;&lt;strong&gt;&lt;em&gt;nets-nodegroups&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; implements the &lt;strong&gt;node group extraction framework&lt;/strong&gt; for network analysis and introduces the &lt;strong&gt;group type parameter Tau&lt;/strong&gt; for researching and exploring node group structures of various networks. The framework is capable of identifying and sequentially extracting significant node group structures, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;communities&lt;/li&gt;
&lt;li&gt;modules&lt;/li&gt;
&lt;li&gt;core/periphery&lt;/li&gt;
&lt;li&gt;hubs &amp;amp; spokes&lt;/li&gt;
&lt;li&gt;and many similar structures&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Details of the algorithm and framework are published in:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;L. Šubelj, N. Blagus, and M. Bajec, “Group extraction for real-world networks: The case of communities, modules, and hubs and spokes,” in Proc. of NetSci ’13, 2013, p. 152.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;L. Šubelj, S. Žitnik, N. Blagus, and M. Bajec, “Node mixing and group structure of complex software networks,” Advs. Complex Syst., vol. 17, 2014.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The adopted network node group extraction framework extracts groups from a simple undirected graph sequentially. An optimization method (currently random-restart hill climbing) is used to maximize the group criterion &lt;em&gt;W(S,T)&lt;/em&gt; and extract a group &lt;em&gt;S&lt;/em&gt; with the corresponding linking pattern &lt;em&gt;T&lt;/em&gt;. After extraction, edges between &lt;em&gt;S&lt;/em&gt; and &lt;em&gt;T&lt;/em&gt; are removed and the whole process is repeated on the largest weakly-connected component, as long as the group criterion &lt;em&gt;W&lt;/em&gt; remains larger than expected on an Erdős–Rényi random graph.&lt;/p&gt;
&lt;p&gt;Open source project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-home"&gt;&lt;/i&gt; home: &lt;a href="http://gw.tnode.com/nets-nodegroups/"&gt;http://gw.tnode.com/nets-nodegroups/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-github-square"&gt;&lt;/i&gt; github: &lt;a href="http://github.com/gw0/nets-nodegroups/"&gt;http://github.com/gw0/nets-nodegroups/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-laptop"&gt;&lt;/i&gt; technology: &lt;em&gt;C++&lt;/em&gt;, &lt;em&gt;SNAP&lt;/em&gt; library&lt;/li&gt;
&lt;li&gt;&lt;i class="fa fa-fw fa-bookmark-o"&gt;&lt;/i&gt; citation: &lt;a href="http://dx.doi.org/10.5281/zenodo.11589"&gt;&lt;img alt="DOI:10.5281/zenodo.11589" src="http://zenodo.org/badge/doi/10.5281/zenodo.11589.png"/&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="usage"&gt;Usage&lt;/h2&gt;
&lt;p&gt;First compile the tool to get the executable &lt;code&gt;nodegroups&lt;/code&gt; (see below). To use it, provide a file with graph edges and specify the parameters you want to tweak:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;-o&lt;/code&gt;: prefix for all file names (simplified usage) (default: &lt;code&gt;graph&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-i&lt;/code&gt;: input file with graph edges (undirected edge per line) (default: &lt;code&gt;graph.edgelist&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-l&lt;/code&gt;: input file (optional) with node labels (node ID, node label) (default: &lt;code&gt;graph.labels&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-og&lt;/code&gt;: output file with ST-group assignments (default: &lt;code&gt;graph.groups&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-os&lt;/code&gt;: output file with only ST-group extraction summary (default: &lt;code&gt;graph.groupssum&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-n&lt;/code&gt;: number of restarts of the optimization algorithm (default: &lt;em&gt;2000&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-sm&lt;/code&gt;: maximal number of steps in each optimization run (default: &lt;em&gt;100000&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-sw&lt;/code&gt;: stop optimization if no W improvement in steps (default: &lt;em&gt;1000&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-ss&lt;/code&gt;: initial random-sample size of S and T (&lt;em&gt;0&lt;/em&gt;=random) (default: &lt;em&gt;1&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-fn&lt;/code&gt;: finish after extracting so many groups (turn off random graphs) (default: &lt;em&gt;0&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-fw&lt;/code&gt;: finish if W smaller than top percentile on random graphs (default: &lt;em&gt;1&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-rg&lt;/code&gt;: random graphs (Erdos-Renyi) to construct for estimating W (default: &lt;em&gt;500&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-rn&lt;/code&gt;: random graph restarts of the optimization algorithm (default: &lt;em&gt;10&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-rf&lt;/code&gt;: random graph re-estimation of W if relative difference smaller (default: &lt;em&gt;inf&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Example to read from &lt;code&gt;graph.edgelist&lt;/code&gt; and output to &lt;code&gt;graph.groups&lt;/code&gt; and &lt;code&gt;graph.groupsreport&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;./nodegroups&lt;/span&gt; -o:graph&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Same example but extract only first 12 groups (ignoring estimated W on random graphs):&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;./nodegroups&lt;/span&gt; -o:graph -fn:12&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Input file &lt;code&gt;graph.edgelist&lt;/code&gt; contains undirected graph edges (&lt;code&gt;-i:&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;0 1
0 2
1 2
2 3
3 4
3 5
...&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output file &lt;code&gt;graph.groups&lt;/code&gt; contains extracted node groups &lt;em&gt;S&lt;/em&gt; and linking patterns &lt;em&gt;T&lt;/em&gt; (&lt;code&gt;-og:&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# NId GroupS GroupT NLabel
0     0      0      foo
1     0      0      bar
2     0      0      foobar
3     1      -1     -
2     -1     1      -
...&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output file &lt;code&gt;graph.groupsreport&lt;/code&gt; contains a summary of extracted node groups (&lt;code&gt;-or:&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Graphs: 12  Nodes: 115  Edges: 613
N   M   N_S M_S N_T M_T N_ST M_ST L_ST L_STc W        Tau    Mod_S  Mod_T  Type
115 613 9   36  9   36  9    36   72   25    823.0000 1.0000 0.1352 0.1352 COM
115 582 9   36  9   36  9    36   72   30    818.0000 1.0000 0.1164 0.1164 COM
115 550 10  40  10  40  10   40   80   30    810.0000 1.0000 0.1191 0.1191 COM
...&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Description of columns:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;N&lt;/code&gt;: number of nodes left in graph&lt;/li&gt;
&lt;li&gt;&lt;code&gt;M&lt;/code&gt;: number of edges left in graph&lt;/li&gt;
&lt;li&gt;&lt;code&gt;N_S&lt;/code&gt;: number of nodes in subgraph on group &lt;em&gt;S&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;M_S&lt;/code&gt;: number of edges in subgraph on group &lt;em&gt;S&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;N_T&lt;/code&gt;: number of nodes in subgraph on linking pattern &lt;em&gt;T&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;M_T&lt;/code&gt;: number of edges in subgraph on linking pattern &lt;em&gt;T&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;N_ST&lt;/code&gt;: number of nodes in subgraph on intersection of &lt;em&gt;S&lt;/em&gt; and &lt;em&gt;T&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;M_ST&lt;/code&gt;: number of edges in subgraph on intersection of &lt;em&gt;S&lt;/em&gt; and &lt;em&gt;T&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;L_ST&lt;/code&gt;: number of edges &lt;em&gt;L(S,T)&lt;/em&gt; between groups &lt;em&gt;S&lt;/em&gt; and &lt;em&gt;T&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;L_STc&lt;/code&gt;: number of edges &lt;em&gt;L(S,Tc)&lt;/em&gt; between groups &lt;em&gt;S&lt;/em&gt; and complement of &lt;em&gt;T&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;W&lt;/code&gt;: group criterion &lt;em&gt;W(S,T)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Tau&lt;/code&gt;: group type parameter &lt;em&gt;Tau(S,T)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Mod_S&lt;/code&gt;: modularity measure on group &lt;em&gt;S&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Mod_T&lt;/code&gt;: modularity measure on linking pattern &lt;em&gt;T&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Type&lt;/code&gt;: human name for group type parameter &lt;em&gt;Tau&lt;/em&gt;:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;COM&lt;/code&gt;: community (&lt;em&gt;S = T&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;MOD&lt;/code&gt;: module (&lt;em&gt;S intersection with T = 0&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;HSD&lt;/code&gt;: hub&amp;amp;spokes module (module and &lt;em&gt;|T| = 1&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;MIX&lt;/code&gt;: mixture (otherwise)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CPX&lt;/code&gt;: core/periphery mixture (&lt;em&gt;S subset of T&lt;/em&gt; or &lt;em&gt;T subset of S&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Exit status codes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;-1&lt;/code&gt;: error, file not found or crash&lt;/li&gt;
&lt;li&gt;&lt;code&gt;0&lt;/code&gt; : success, groups extracted&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1&lt;/code&gt; : no groups extracted&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="build"&gt;Build&lt;/h2&gt;
&lt;p&gt;You will need to compile the source code to get a working tool. This should work on any platform using any standard &lt;em&gt;C++&lt;/em&gt; compiler (e.g. &lt;em&gt;GCC&lt;/em&gt;, &lt;em&gt;Visual Studio&lt;/em&gt;), but it has only been extensively tested in the following environment.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.debian.org/"&gt;&lt;em&gt;Debian&lt;/em&gt;&lt;/a&gt; &lt;small&gt;(7.3, 8.0)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;build-essential&lt;/em&gt; &lt;small&gt;(~11.5)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;g++&lt;/em&gt; &lt;small&gt;(~4:4.7.2-1)&lt;/small&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://github.com/snap-stanford/snap/"&gt;&lt;em&gt;SNAP&lt;/em&gt;&lt;/a&gt; &lt;small&gt;(~2.1)&lt;/small&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;First you need to &lt;strong&gt;download&lt;/strong&gt; &lt;a href="http://github.com/gw0/nets-nodegroups/"&gt;source code of &lt;em&gt;net-nodegroups&lt;/em&gt;&lt;/a&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;git&lt;/span&gt; clone http://github.com/gw0/nets-nodegroups.git
$ &lt;span class="kw"&gt;cd&lt;/span&gt; ./nets-nodegroups&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Download&lt;/strong&gt; and compile &lt;a href="http://github.com/snap-stanford/snap/"&gt;&lt;em&gt;SNAP&lt;/em&gt; library&lt;/a&gt; from console using &lt;em&gt;GCC&lt;/em&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;apt-get&lt;/span&gt; install build-essential g++
$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://snap.stanford.edu/releases/Snap-2.1.zip
$ &lt;span class="kw"&gt;unzip&lt;/span&gt; Snap-2.1.zip
$ &lt;span class="kw"&gt;mv&lt;/span&gt; Snap-2.1 snap
$ &lt;span class="kw"&gt;cd&lt;/span&gt; ./snap
$ &lt;span class="kw"&gt;make&lt;/span&gt; all
$ &lt;span class="kw"&gt;cd&lt;/span&gt; ..&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Compile &lt;em&gt;net-nodegroups&lt;/em&gt;&lt;/strong&gt; from console using &lt;em&gt;GCC&lt;/em&gt;:&lt;/p&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;cd&lt;/span&gt; ./src
$ &lt;span class="kw"&gt;make&lt;/span&gt; all&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Executable file is located in &lt;code&gt;./src/nodegroups&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id="feedback"&gt;Feedback&lt;/h2&gt;
&lt;p&gt;If you encounter any bugs or have feature requests, please file them in the &lt;a href="http://github.com/gw0/nets-nodegroups/issues/"&gt;issue tracker&lt;/a&gt;, or even develop it yourself and submit a pull request on &lt;a href="http://github.com/gw0/nets-nodegroups/"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="license"&gt;License&lt;/h2&gt;
&lt;p&gt;Copyright © 2014-15 &lt;em&gt;gw0&lt;/em&gt; [&lt;a href="http://gw.tnode.com/"&gt;http://gw.tnode.com/&lt;/a&gt;] &amp;lt;&lt;script type="text/javascript"&gt;
&lt;!--
h='&amp;#116;&amp;#110;&amp;#x6f;&amp;#100;&amp;#x65;&amp;#46;&amp;#x63;&amp;#x6f;&amp;#x6d;';a='&amp;#64;';n='&amp;#x67;&amp;#x77;&amp;#46;&amp;#50;&amp;#48;&amp;#x31;&amp;#x35;';e=n+a+h;
document.write('&lt;a h'+'ref'+'="ma'+'ilto'+':'+e+'"&gt;'+e+'&lt;\/'+'a'+'&gt;');
// --&gt;
&lt;/script&gt;&lt;noscript&gt;gw.2015 at tnode dot com&lt;/noscript&gt;&amp;gt;&lt;/p&gt;
&lt;p&gt;This code is licensed under the &lt;a href="LICENSE_AGPL-3.0.txt"&gt;GNU Affero General Public License 3.0+&lt;/a&gt; (&lt;em&gt;AGPL-3.0+&lt;/em&gt;). Note that it is mandatory to make all modifications and complete source code publicly available to any user.&lt;/p&gt;
</summary><category term="network analysis"></category><category term="tool"></category></entry><entry><title>Darky's ROM on Galaxy S i9000</title><link href="http://gw.tnode.com/android/darkys-rom-on-galaxy-s-i9000/" rel="alternate"></link><updated>2012-08-07T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2011-02-08:android/darkys-rom-on-galaxy-s-i9000/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Darky's rom logo" height="75" src="http://gw.tnode.com/android/img/darkys-rom-logo.jpg" width="265"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Darky’s ROM 9.2/10.2&lt;/em&gt;&lt;/strong&gt; is a custom ROM based on Android 2.2.1 and intended for the &lt;em&gt;Samsung Galaxy S&lt;/em&gt; phone (GT-I9000XXJPY). Here are brief instructions for installing, hacking, and improving your mobile phone.&lt;/p&gt;
&lt;h2 id="flashing"&gt;Flashing&lt;/h2&gt;
&lt;h3 id="full-backup-from-linux"&gt;Full backup from Linux&lt;/h3&gt;
&lt;p&gt;It is always recommended to do a full backup of the original firmware images before you start doing anything. To do this on a brand new &lt;em&gt;Samsung Galaxy S&lt;/em&gt; with &lt;em&gt;Android Recovery &amp;lt;3e&amp;gt;&lt;/em&gt; over USB without flashing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;if you plan to give images to someone else, do a factory reset&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;connect over USB&lt;/strong&gt; in debugging mode and check if &lt;code&gt;adb&lt;/code&gt; is working (part of &lt;a href="http://developer.android.com/sdk/index.html"&gt;&lt;em&gt;Android SDK Tools&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;download and extract&lt;/strong&gt; &lt;a href="http://gw.tnode.com/android/f/darky-rom-root-backup-clean_20110209.tgz"&gt;root, backup, and clean scripts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;root it&lt;/strong&gt; with &lt;kbd&gt;./doroot.sh&lt;/kbd&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;make full backup&lt;/strong&gt; images of all partitions with &lt;kbd&gt;./dobackup.sh&lt;/kbd&gt;&lt;/li&gt;
&lt;li&gt;check if everything is fine and all images are present&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="flashing-darkys-rom-10.2-from-linux"&gt;Flashing Darky’s ROM 10.2 from Linux&lt;/h3&gt;
&lt;p&gt;Follow these instructions to flash a clean &lt;em&gt;Darky’s ROM 10.2&lt;/em&gt; on a &lt;em&gt;Samsung Galaxy S&lt;/em&gt; from Linux:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;if your ROM uses lagfixes, disable them&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;connect over USB&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;install&lt;/strong&gt; the &lt;a href="http://glassechidna.com.au/heimdall/"&gt;&lt;em&gt;heimdall&lt;/em&gt;&lt;/a&gt; tool (note: never use the &lt;em&gt;heimdall&lt;/em&gt; dump mode)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;download and extract&lt;/strong&gt; &lt;a href="http://sourceforge.net/projects/ficeto.u/files/DarkyROM_10.2_Resurrection.zip/download"&gt;&lt;em&gt;Darky’s ROM 10.2 Resurrection Edition&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;also &lt;strong&gt;extract&lt;/strong&gt; &lt;code&gt;PDA.tar.md5&lt;/code&gt; inside the archive (&lt;code&gt;tar vxf PDA.tar.md5&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;reboot into download mode&lt;/strong&gt; (turn off and press &lt;kbd&gt;volume up + home + power&lt;/kbd&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;flash&lt;/strong&gt; with &lt;em&gt;heimdall&lt;/em&gt;:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;heimdall&lt;/span&gt; flash --repartition --pit s1_odin_20100512.pit --factoryfs factoryfs.rfs --cache cache.rfs --dbdata dbdata.rfs --primary-boot boot.bin --secondary-boot Sbl.bin --param param.lfs --modem modem.bin --kernel zImage&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;wait for flashing and installation to finish&lt;/li&gt;
&lt;li&gt;reboot once more&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;On Windows a similar tool for flashing the &lt;em&gt;Samsung Galaxy S&lt;/em&gt; is called &lt;a href="http://odindownload.com/"&gt;&lt;em&gt;Odin&lt;/em&gt;&lt;/a&gt; (it expects the files packed into a &lt;code&gt;.tar&lt;/code&gt; archive and flashed as PDA).&lt;/p&gt;
&lt;h3 id="replace-boot-animation-and-disable-boot-sound"&gt;Replace boot animation and disable boot sound&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Darky’s ROM&lt;/em&gt; has a custom boot animation with sound that can be annoying. Instructions to replace the boot animation from Linux:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;connect over USB&lt;/strong&gt; in debugging mode and check if &lt;code&gt;adb&lt;/code&gt; is working (part of &lt;a href="http://developer.android.com/sdk/index.html"&gt;&lt;em&gt;Android SDK Tools&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;download&lt;/strong&gt; &lt;a href="http://www.mediafire.com/?1v6aj8wrsa2nsal"&gt;Nexus S boot animation&lt;/a&gt; (or another compatible animation)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;replace boot animation&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;adb&lt;/span&gt; shell mount -o remount,rw /dev/block/stl9 /system
$ &lt;span class="kw"&gt;adb&lt;/span&gt; shell mv /system/media/bootanimation.zip /system/media/bootanimation.zip.old
$ &lt;span class="kw"&gt;adb&lt;/span&gt; push bootanimation.zip /system/media/bootanimation.zip
$ &lt;span class="kw"&gt;adb&lt;/span&gt; shell chmod 644 /system/media/bootanimation.zip&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;disable boot sound&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;adb&lt;/span&gt; shell mv /system/etc/PowerOn.wav /system/etc/PowerOn.wav.old
$ &lt;span class="kw"&gt;adb&lt;/span&gt; shell mv /system/darkysound/android_audio.mp3 /system/darkysound/android_audio.mp3.old&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="clean-darkys-rom"&gt;Clean Darky’s ROM&lt;/h3&gt;
&lt;p&gt;Unfortunately &lt;em&gt;Darky’s ROM 9.2/10.2&lt;/em&gt; contains a couple of useless or suspicious apps. Some of them can be uninstalled with the normal approach (under &lt;em&gt;Settings&lt;/em&gt;/&lt;em&gt;Apps&lt;/em&gt; menu):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Voodoo Control App&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;FasterFix&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Apps that cannot be uninstalled this way must be removed using ADB:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;connect over USB&lt;/strong&gt; in debugging mode and check if &lt;code&gt;adb&lt;/code&gt; is working (part of &lt;a href="http://developer.android.com/sdk/index.html"&gt;&lt;em&gt;Android SDK Tools&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;remove apps&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;./adb&lt;/span&gt; shell su -c &lt;span class="st"&gt;'mount -o remount,rw /dev/block/stl9 /system'&lt;/span&gt;
$ &lt;span class="kw"&gt;./adb&lt;/span&gt; shell mkdir /sdcard/device-files
$ &lt;span class="kw"&gt;./adb&lt;/span&gt; shell su -c &lt;span class="st"&gt;'cp /system/app/Fileindex.apk /sdcard/device-files/ &amp;amp;amp;&amp;amp;amp; rm /system/app/Fileindex.apk'&lt;/span&gt;
$ &lt;span class="kw"&gt;./adb&lt;/span&gt; shell su -c &lt;span class="st"&gt;'cp /system/app/PressReader.apk /sdcard/device-files/ &amp;amp;amp;&amp;amp;amp; rm /system/app/PressReader.apk'&lt;/span&gt;
$ &lt;span class="kw"&gt;./adb&lt;/span&gt; shell su -c &lt;span class="st"&gt;'cp /system/app/Samsung.apk /sdcard/device-files/ &amp;amp;amp;&amp;amp;amp; rm /system/app/Samsung.apk'&lt;/span&gt;
$ &lt;span class="kw"&gt;./adb&lt;/span&gt; shell su -c &lt;span class="st"&gt;'cp /system/app/Telegraaf.apk /sdcard/device-files/ &amp;amp;amp;&amp;amp;amp; rm /system/app/Telegraaf.apk'&lt;/span&gt;
$ &lt;span class="kw"&gt;./adb&lt;/span&gt; shell su -c &lt;span class="st"&gt;'cp /system/app/txtr-android-client-bol-1.0.3-nobooks.apk /sdcard/device-files/ &amp;amp;amp;&amp;amp;amp; rm /system/app/txtr-android-client-bol-1.0.3-nobooks.apk'&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="other"&gt;Other&lt;/h2&gt;
&lt;h3 id="unlock-a-screen-locked-device"&gt;Unlock a screen locked device&lt;/h3&gt;
&lt;p&gt;If you play around too much with the pattern unlock screen on an Android phone, it will lock permanently. You then need to unlock it either with your Google account or over ADB (if you have USB debugging enabled):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;connect over USB&lt;/strong&gt; in debugging mode and check if &lt;code&gt;adb&lt;/code&gt; is working (part of &lt;a href="http://developer.android.com/sdk/index.html"&gt;&lt;em&gt;Android SDK Tools&lt;/em&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;install&lt;/strong&gt; &lt;em&gt;sqlite3&lt;/em&gt; tool on the phone or computer (in case it is not yet on your phone)&lt;/li&gt;
&lt;li&gt;on Android versions with &lt;em&gt;sqlite3&lt;/em&gt; on the phone itself, you can run the following steps directly in &lt;code&gt;./adb shell&lt;/code&gt; against the file &lt;code&gt;/dbdata/databases/com.android.providers.settings/settings.db&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;on Android versions without &lt;em&gt;sqlite3&lt;/em&gt;, an ugly workaround is to pull the settings file to your computer with &lt;code&gt;./adb pull /dbdata/databases/com.android.providers.settings/settings.db&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;open settings&lt;/strong&gt; with &lt;code&gt;./sqlite3 settings.db&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;update security lock setting&lt;/strong&gt; &lt;code&gt;update secure set value=0 where name='lockscreen.lockedoutpermanently';&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;close settings&lt;/strong&gt; &lt;code&gt;.quit&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;if you pulled the settings file to your computer, push it back with &lt;code&gt;./adb push settings.db /dbdata/databases/com.android.providers.settings/settings.db&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;to increase the number of failed unlock attempts allowed before locking, pull the policy file with &lt;code&gt;./adb pull /data/system/device_policies.xml&lt;/code&gt;, modify the value in the XML file, and push it back&lt;/li&gt;
&lt;/ul&gt;
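The `secure` table edit above can be sketched against a local copy of `settings.db`. This is a minimal stand-in, assuming the `sqlite3` command-line tool is installed; the table is created here only so the sketch runs without a real phone:

```shell
#!/bin/sh
# Sketch: clear the permanent-lock flag in a local copy of settings.db.
set -e
db=$(mktemp)   # stands in for the pulled settings.db
# Create a stand-in table with the relevant row (on a real phone this already exists).
sqlite3 "$db" "CREATE TABLE secure (name TEXT, value TEXT);
               INSERT INTO secure VALUES ('lockscreen.lockedoutpermanently', '1');"
# The actual fix from the list above:
sqlite3 "$db" "UPDATE secure SET value=0 WHERE name='lockscreen.lockedoutpermanently';"
sqlite3 "$db" "SELECT value FROM secure WHERE name='lockscreen.lockedoutpermanently';"
rm -f "$db"
```

After pushing the modified file back, reboot the phone so the settings provider rereads the database.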
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.darkyrom.com/"&gt;http://www.darkyrom.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://forum.xda-developers.com/wiki/index.php?title=Samsung_Galaxy_S_Series"&gt;http://forum.xda-developers.com/wiki/index.php?title=Samsung_Galaxy_S_Series&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://forum.xda-developers.com/showthread.php?t=939752"&gt;http://forum.xda-developers.com/showthread.php?t=939752&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://developer.android.com/sdk/index.html"&gt;http://developer.android.com/sdk/index.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="android"></category><category term="phone"></category><category term="backup"></category><category term="flash"></category><category term="hack"></category></entry><entry><title>Installation alternatives for Debian 8</title><link href="http://gw.tnode.com/debian/installation-alternatives-for-debian-8/" rel="alternate"></link><updated>2015-05-15T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2009-03-13:debian/installation-alternatives-for-debian-8/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Debian 8 logo" height="120" src="http://gw.tnode.com/debian/img/debian-8-logo.png" width="248"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="http://www.debian.org/"&gt;&lt;strong&gt;&lt;em&gt;Debian 8&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; and similar systems can be installed in many ways, but the alternative &lt;em&gt;Network boot&lt;/em&gt; variant is the smallest and cleanest, as it installs &lt;strong&gt;directly from the internet&lt;/strong&gt;. Depending on the selected method and architecture (&lt;code&gt;amd64&lt;/code&gt; is for 64-bit Intel/AMD) you need an appropriate &lt;a href="http://www.debian.org/distrib/"&gt;installation image&lt;/a&gt; or &lt;a href="http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/"&gt;special boot files&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="locally-with-network-boot"&gt;Locally with network boot&lt;/h2&gt;
&lt;h3 id="from-usb-stick"&gt;From USB stick&lt;/h3&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;insert an empty USB stick&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;download&lt;/strong&gt; a &lt;a href="http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-8.1.0-amd64-netinst.iso"&gt;&lt;em&gt;Netboot ISO image&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-8.1.0-amd64-netinst.iso&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="3" type="1"&gt;
&lt;li&gt;&lt;strong&gt;write image&lt;/strong&gt; to the USB stick (device &lt;code&gt;/dev/sdX&lt;/code&gt;) as root&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;sudo&lt;/span&gt; dd if=debian-8.1.0-amd64-netinst.iso of=/dev/sdX bs=16M&lt;span class="kw"&gt;;&lt;/span&gt; &lt;span class="kw"&gt;sync&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="4" type="1"&gt;
&lt;li&gt;reboot from the USB stick (optionally add kernel parameter &lt;code&gt;priority=low&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;
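`dd` gives no success indication beyond exiting, so it is worth verifying the write before rebooting. A sketch of the write-and-verify step, simulated here with scratch files standing in for the ISO and `/dev/sdX` (with a real stick, compare only the image-sized prefix, since the device is larger than the image):

```shell
#!/bin/sh
# Write an image with dd and verify it byte-for-byte (scratch files as placeholders).
set -e
img=$(mktemp)   # stands in for debian-8.1.0-amd64-netinst.iso
dev=$(mktemp)   # stands in for /dev/sdX
head -c 1048576 /dev/urandom > "$img"       # 1 MiB of dummy image data
dd if="$img" of="$dev" bs=16M 2>/dev/null   # the write from step 3
sync
# Compare only the first image-size bytes, as one would against a larger device.
cmp -n "$(stat -c%s "$img")" "$img" "$dev" && echo verified
rm -f "$img" "$dev"
```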
&lt;h3 id="from-cddvd"&gt;From CD/DVD&lt;/h3&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;insert an empty CD or DVD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;download&lt;/strong&gt; a &lt;a href="http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-8.1.0-amd64-netinst.iso"&gt;&lt;em&gt;Netboot ISO image&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-8.1.0-amd64-netinst.iso&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="3" type="1"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;burn image&lt;/strong&gt; (file &lt;code&gt;debian-8.1.0-amd64-netinst.iso&lt;/code&gt;) onto the CD or DVD (using &lt;em&gt;K3b&lt;/em&gt; or other CD/DVD burning utility)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reboot from the CD/DVD (optionally add kernel parameter &lt;code&gt;priority=low&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="from-hdd-using-a-special-image"&gt;From HDD using a special image&lt;/h3&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;insert an empty target HDD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;download&lt;/strong&gt; a &lt;a href="http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/hd-media/boot.img.gz"&gt;&lt;em&gt;HD-media image&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/hd-media/boot.img.gz&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="3" type="1"&gt;
&lt;li&gt;&lt;strong&gt;write image&lt;/strong&gt; to the target HDD (device &lt;code&gt;/dev/sdX&lt;/code&gt;) as root&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;gzip&lt;/span&gt; -cd boot.img.gz &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;sudo&lt;/span&gt; dd of=/dev/sdX bs=16M&lt;span class="kw"&gt;;&lt;/span&gt; &lt;span class="kw"&gt;sync&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="4" type="1"&gt;
&lt;li&gt;&lt;p&gt;insert the HDD in the target machine&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reboot from the HDD (optionally add kernel parameter &lt;code&gt;priority=low&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="from-hdd-using-grub1-or-grub2"&gt;From HDD using GRUB1 or GRUB2&lt;/h3&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;strong&gt;download&lt;/strong&gt; &lt;a href="http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux"&gt;&lt;em&gt;Netboot kernel&lt;/em&gt;&lt;/a&gt; and &lt;a href="http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz"&gt;&lt;em&gt;Netboot initrd&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux
$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="2" type="1"&gt;
&lt;li&gt;&lt;strong&gt;mount&lt;/strong&gt; &lt;code&gt;/boot&lt;/code&gt; of the target HDD (device &lt;code&gt;/dev/sdX&lt;/code&gt;) as root&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;sudo&lt;/span&gt; mount /dev/sdX1 /mnt&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="3" type="1"&gt;
&lt;li&gt;&lt;strong&gt;copy kernel and initrd&lt;/strong&gt; to mounted &lt;code&gt;/boot&lt;/code&gt; of the target HDD&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;sudo&lt;/span&gt; cp linux /mnt/netboot-linux
$ &lt;span class="kw"&gt;sudo&lt;/span&gt; cp initrd.gz /mnt/netboot-initrd.gz&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="4" type="1"&gt;
&lt;li&gt;&lt;strong&gt;unmount&lt;/strong&gt; &lt;code&gt;/boot&lt;/code&gt; of the target HDD&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;sudo&lt;/span&gt; umount /mnt&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="5" type="1"&gt;
&lt;li&gt;&lt;p&gt;reboot into &lt;em&gt;GRUB&lt;/em&gt; command line (press &lt;code&gt;c&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;for &lt;em&gt;GRUB1&lt;/em&gt;&lt;/strong&gt; enter this to boot&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;root (hd0,0)
kernel /netboot-linux priority=low
initrd /netboot-initrd.gz
boot&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="7" type="1"&gt;
&lt;li&gt;&lt;strong&gt;for &lt;em&gt;GRUB2&lt;/em&gt;&lt;/strong&gt; enter this to boot&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;set root=(hd0,msdos1)
linux /netboot-linux priority=low
initrd /netboot-initrd.gz
boot&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.debian.org/releases/stable/installmanual"&gt;http://www.debian.org/releases/stable/installmanual&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="remotely-with-debootstrap"&gt;Remotely with debootstrap&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Debian&lt;/em&gt; and similar systems are so flexible that they can also be installed remotely over SSH on a machine running any version of Linux. But beware: such procedures are extremely dangerous, fragile, and require expert knowledge. To prevent you from breaking your system, we won’t describe them here.&lt;/p&gt;
</summary><category term="debian"></category><category term="install"></category></entry><entry><title>Installation alternatives for Ubuntu 14.04 LTS</title><link href="http://gw.tnode.com/debian/installation-alternatives-for-ubuntu-14-04/" rel="alternate"></link><updated>2015-05-15T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2009-03-13:debian/installation-alternatives-for-ubuntu-14-04/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Ubuntu 14.04 logo" height="120" src="http://gw.tnode.com/debian/img/ubuntu-14-04-logo.png" width="267"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="http://www.ubuntu.com/"&gt;&lt;strong&gt;&lt;em&gt;Ubuntu 14.04 LTS&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; and similar systems can be installed in many ways, but the alternative &lt;em&gt;Network installer&lt;/em&gt; variant is the smallest and cleanest, as it installs &lt;strong&gt;directly from the internet&lt;/strong&gt;. Depending on the selected method and architecture (&lt;code&gt;amd64&lt;/code&gt; is for 64-bit Intel/AMD) you need an appropriate &lt;a href="http://www.ubuntu.com/download/"&gt;installation image&lt;/a&gt; or &lt;a href="http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/"&gt;special boot files&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="locally-with-network-installer"&gt;Locally with network installer&lt;/h2&gt;
&lt;h3 id="from-usb-stick"&gt;From USB stick&lt;/h3&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;insert an empty USB stick&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;download&lt;/strong&gt; a &lt;a href="http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/mini.iso"&gt;&lt;em&gt;Netboot ISO image&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/mini.iso&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="3" type="1"&gt;
&lt;li&gt;&lt;strong&gt;write image&lt;/strong&gt; to the USB stick (device &lt;code&gt;/dev/sdX&lt;/code&gt;) as root&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;sudo&lt;/span&gt; dd if=mini.iso of=/dev/sdX bs=16M&lt;span class="kw"&gt;;&lt;/span&gt; &lt;span class="kw"&gt;sync&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="4" type="1"&gt;
&lt;li&gt;reboot from the USB stick (optionally add kernel parameter &lt;code&gt;priority=low&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="from-cddvd"&gt;From CD/DVD&lt;/h3&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;insert an empty CD or DVD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;download&lt;/strong&gt; a &lt;a href="http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/mini.iso"&gt;&lt;em&gt;Netboot ISO image&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/mini.iso&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="3" type="1"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;burn image&lt;/strong&gt; (file &lt;code&gt;mini.iso&lt;/code&gt;) onto the CD or DVD (using &lt;em&gt;K3b&lt;/em&gt; or other CD/DVD burning utility)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reboot from the CD/DVD (optionally add kernel parameter &lt;code&gt;priority=low&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="from-hdd-using-a-special-image"&gt;From HDD using a special image&lt;/h3&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;insert an empty target HDD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;download&lt;/strong&gt; a &lt;a href="http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/hd-media/boot.img.gz"&gt;&lt;em&gt;HD-media image&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/hd-media/boot.img.gz&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="3" type="1"&gt;
&lt;li&gt;&lt;strong&gt;write image&lt;/strong&gt; to the target HDD (device &lt;code&gt;/dev/sdX&lt;/code&gt;) as root&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;gzip&lt;/span&gt; -cd boot.img.gz &lt;span class="kw"&gt;|&lt;/span&gt; &lt;span class="kw"&gt;sudo&lt;/span&gt; dd of=/dev/sdX bs=16M&lt;span class="kw"&gt;;&lt;/span&gt; &lt;span class="kw"&gt;sync&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="4" type="1"&gt;
&lt;li&gt;&lt;p&gt;insert the HDD in the target machine&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reboot from the HDD (optionally add kernel parameter &lt;code&gt;priority=low&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="from-hdd-using-grub1-or-grub2"&gt;From HDD using GRUB1 or GRUB2&lt;/h3&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;strong&gt;download&lt;/strong&gt; &lt;a href="http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/linux"&gt;&lt;em&gt;Netboot kernel&lt;/em&gt;&lt;/a&gt; and &lt;a href="http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/initrd.gz"&gt;&lt;em&gt;Netboot initrd&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/linux
$ &lt;span class="kw"&gt;wget&lt;/span&gt; http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/initrd.gz&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="2" type="1"&gt;
&lt;li&gt;&lt;strong&gt;mount&lt;/strong&gt; &lt;code&gt;/boot&lt;/code&gt; of the target HDD (device &lt;code&gt;/dev/sdX&lt;/code&gt;) as root&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;sudo&lt;/span&gt; mount /dev/sdX1 /mnt&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="3" type="1"&gt;
&lt;li&gt;&lt;strong&gt;copy kernel and initrd&lt;/strong&gt; to mounted &lt;code&gt;/boot&lt;/code&gt; of the target HDD&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;sudo&lt;/span&gt; cp linux /mnt/netboot-linux
$ &lt;span class="kw"&gt;sudo&lt;/span&gt; cp initrd.gz /mnt/netboot-initrd.gz&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="4" type="1"&gt;
&lt;li&gt;&lt;strong&gt;unmount&lt;/strong&gt; &lt;code&gt;/boot&lt;/code&gt; of the target HDD&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="sourceCode bash"&gt;&lt;code class="sourceCode bash"&gt;$ &lt;span class="kw"&gt;sudo&lt;/span&gt; umount /mnt&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="5" type="1"&gt;
&lt;li&gt;&lt;p&gt;reboot into &lt;em&gt;GRUB&lt;/em&gt; command line (press &lt;code&gt;c&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;for &lt;em&gt;GRUB1&lt;/em&gt;&lt;/strong&gt; enter this to boot&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;root (hd0,0)
kernel /netboot-linux priority=low
initrd /netboot-initrd.gz
boot&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="7" type="1"&gt;
&lt;li&gt;&lt;strong&gt;for &lt;em&gt;GRUB2&lt;/em&gt;&lt;/strong&gt; enter this to boot&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;set root=(hd0,msdos1)
linux /netboot-linux priority=low
initrd /netboot-initrd.gz
boot&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://help.ubuntu.com/14.04/installation-guide/"&gt;http://help.ubuntu.com/14.04/installation-guide/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://help.ubuntu.com/community/Installation/OverSSH"&gt;http://help.ubuntu.com/community/Installation/OverSSH&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="remotely-with-debootstrap"&gt;Remotely with debootstrap&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Debian&lt;/em&gt; and similar systems are so flexible that they can also be installed remotely over SSH on a machine running any version of Linux. But beware: such procedures are extremely dangerous, fragile, and require expert knowledge. To prevent you from breaking your system, we won’t describe them here.&lt;/p&gt;
</summary><category term="ubuntu"></category><category term="install"></category></entry><entry><title>Windows XP disable system restore</title><link href="http://gw.tnode.com/windows/windows-xp-disable-system-restore/" rel="alternate"></link><updated>2012-09-18T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2005-07-09:windows/windows-xp-disable-system-restore/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Windows XP logo" height="200" src="http://gw.tnode.com/windows/img/windows-xp-logo.png" width="250"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;One of the features of &lt;em&gt;Windows XP/ME/Vista&lt;/em&gt; is a special backup utility called &lt;em&gt;System restore&lt;/em&gt;. It &lt;strong&gt;automatically creates backups&lt;/strong&gt; of important files and is used by the operating system to restore files on your computer in case they become damaged. The feature is enabled by default and it creates a hidden folder called &lt;code&gt;_Restore&lt;/code&gt; or &lt;code&gt;System Volume Information&lt;/code&gt; on each partition where it stores the data (these folders are updated when the computer restarts).&lt;/p&gt;
&lt;p&gt;Although this is desirable functionality, &lt;strong&gt;in some cases &lt;em&gt;System restore&lt;/em&gt; should be temporarily turned off&lt;/strong&gt;. A problem appears if a virus, infected file, or other malicious software gets backed up. Reverting to such a restore point can &lt;strong&gt;accidentally restore malicious software&lt;/strong&gt; that will re-infect your computer. This can happen even if you have a good antivirus program installed, because Windows prevents other programs from accessing this folder. You must disable the &lt;em&gt;System restore&lt;/em&gt; utility to remove the infected files.&lt;/p&gt;
&lt;h3 id="windows-xp"&gt;Windows XP&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Right click the &lt;em&gt;My Computer&lt;/em&gt; icon on the desktop and select &lt;em&gt;Properties&lt;/em&gt;. If you do not have a &lt;em&gt;My Computer&lt;/em&gt; icon, you can accomplish the same thing by opening the &lt;em&gt;System&lt;/em&gt; application in the Control Panel.&lt;/li&gt;
&lt;li&gt;Select the &lt;em&gt;System restore tab&lt;/em&gt; and a window like the one below should appear.&lt;/li&gt;
&lt;li&gt;Put a check mark next to &lt;strong&gt;&lt;em&gt;Turn off System restore on All Drives&lt;/em&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Confirm with &lt;em&gt;OK&lt;/em&gt; and you will be prompted to restart the computer. Choose &lt;em&gt;Yes&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/windows/img/windows-xp-disable-system-restore.gif"&gt;&lt;img alt="Windows XP disable system restore" height="486" src="http://gw.tnode.com/windows/img/windows-xp-disable-system-restore.gif" width="419"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Windows XP disable system restore&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;div class="alert alert-warning" role="alert"&gt;
&lt;strong&gt;Note:&lt;/strong&gt; Disabling &lt;em&gt;System restore&lt;/em&gt; or reverting to a previously saved restore point &lt;strong&gt;could affect your personal data&lt;/strong&gt;, so create a backup just in case. &lt;strong&gt;To re-enable the &lt;em&gt;System restore&lt;/em&gt;&lt;/strong&gt;, follow the above steps, but remove the check mark next to &lt;em&gt;Disable System restore&lt;/em&gt; or &lt;em&gt;Turn off System restore on All Drives&lt;/em&gt;.
&lt;/div&gt;
&lt;h3 id="windows-me"&gt;Windows ME&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Right click on the &lt;em&gt;My Computer&lt;/em&gt; icon on your desktop and &lt;em&gt;select Properties&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Go to the &lt;em&gt;Performance tab&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Click the &lt;em&gt;File System&lt;/em&gt; button.&lt;/li&gt;
&lt;li&gt;Select the &lt;em&gt;Troubleshooting tab&lt;/em&gt; and you should see a window like the one below.&lt;/li&gt;
&lt;li&gt;Put a check mark next to &lt;strong&gt;&lt;em&gt;Disable System restore&lt;/em&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;After you confirm with the &lt;em&gt;OK&lt;/em&gt; button you will be prompted to restart the computer. Choose &lt;em&gt;Yes&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;figure class="text-center"&gt;
&lt;a href="http://gw.tnode.com/windows/img/windows-me-disable-system-restore.gif"&gt;&lt;img alt="Windows ME disable system restore" height="328" src="http://gw.tnode.com/windows/img/windows-me-disable-system-restore.gif" width="410"/&gt;&lt;/a&gt;
&lt;figcaption&gt;
&lt;em&gt;Windows ME disable system restore&lt;/em&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.wikihow.com/Delete-System-Restore-Files"&gt;http://www.wikihow.com/Delete-System-Restore-Files&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="windows"></category><category term="setup"></category></entry><entry><title>Windows XP safe mode</title><link href="http://gw.tnode.com/windows/windows-xp-safe-mode/" rel="alternate"></link><updated>2012-09-18T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2005-07-09:windows/windows-xp-safe-mode/</id><summary type="html">
&lt;div class="panel"&gt;
&lt;figure class="panel-body text-center"&gt;
&lt;img alt="Windows XP logo" height="200" src="http://gw.tnode.com/windows/img/windows-xp-logo.png" width="250"/&gt;
&lt;/figure&gt;
&lt;/div&gt;
&lt;p&gt;Sooner or later you will need to &lt;strong&gt;start your &lt;em&gt;Windows&lt;/em&gt; in &lt;em&gt;Safe mode&lt;/em&gt;&lt;/strong&gt;. &lt;em&gt;Safe mode&lt;/em&gt; is a troubleshooting option that starts your computer in a limited state: only the basic files and drivers necessary to run Windows are loaded, which lets you fix various problems with device drivers or remove viruses. The instructions below describe how to restart in &lt;em&gt;Safe mode&lt;/em&gt; on various versions of the Windows operating system.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#windows-95"&gt;&lt;em&gt;Windows 95&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#windows-98me"&gt;&lt;em&gt;Windows 98/ME&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#windows-20002003"&gt;&lt;em&gt;Windows 2000/2003&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#windows-xp"&gt;&lt;em&gt;Windows XP&lt;/em&gt;&lt;/a&gt; (all editions)&lt;/li&gt;
&lt;li&gt;&lt;a href="#windows-vista"&gt;&lt;em&gt;Windows Vista&lt;/em&gt;&lt;/a&gt; (all editions)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="windows-95"&gt;Windows 95&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Open up the &lt;em&gt;Start menu&lt;/em&gt; and click on &lt;em&gt;Shutdown&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Now select &lt;em&gt;Restart The Computer&lt;/em&gt; and confirm it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hold down the &lt;kbd&gt;F8&lt;/kbd&gt;&lt;/strong&gt; key on your keyboard as your PC restarts.&lt;/li&gt;
&lt;li&gt;If your PC starts beeping then release the key for just a second before holding it down again.&lt;/li&gt;
&lt;li&gt;Now sit back and wait for &lt;em&gt;Windows&lt;/em&gt; to start up in &lt;em&gt;Safe mode&lt;/em&gt;. If &lt;em&gt;Windows&lt;/em&gt; doesn’t restart in &lt;em&gt;Safe mode&lt;/em&gt; then repeat the process.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="windows-98me"&gt;Windows 98/ME&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Click on the &lt;em&gt;Start&lt;/em&gt; button and select &lt;em&gt;Shutdown&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;Restart The Computer&lt;/em&gt; and confirm it.&lt;/li&gt;
&lt;li&gt;As the computer restarts, press and &lt;strong&gt;hold down the &lt;kbd&gt;F8&lt;/kbd&gt;&lt;/strong&gt; key until the &lt;em&gt;Windows&lt;/em&gt; Startup menu appears.&lt;/li&gt;
&lt;li&gt;If your PC starts beeping then release the key for a moment and press it again.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Select &lt;em&gt;Safe mode&lt;/em&gt;&lt;/strong&gt; from the Startup menu and press the &lt;kbd&gt;Enter&lt;/kbd&gt; key on your keyboard.&lt;/li&gt;
&lt;li&gt;If everything went as expected, &lt;em&gt;Windows&lt;/em&gt; should start in &lt;em&gt;Safe mode&lt;/em&gt;. If &lt;em&gt;Windows&lt;/em&gt; doesn’t come up in &lt;em&gt;Safe mode&lt;/em&gt;, please try again.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="windows-20002003"&gt;Windows 2000/2003&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Open up the &lt;em&gt;Start menu&lt;/em&gt; and select &lt;em&gt;Shutdown&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;Restart&lt;/em&gt; and confirm it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hold down the &lt;kbd&gt;F8&lt;/kbd&gt;&lt;/strong&gt; key on your keyboard as your PC restarts until the &lt;em&gt;Windows&lt;/em&gt; Startup menu appears.&lt;/li&gt;
&lt;li&gt;If your PC starts beeping then release the key for just a second before holding it down again.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Select &lt;em&gt;Safe mode&lt;/em&gt;&lt;/strong&gt; from the Startup menu, and press the &lt;kbd&gt;Enter&lt;/kbd&gt; key on your keyboard.&lt;/li&gt;
&lt;li&gt;Now sit back and wait for &lt;em&gt;Windows&lt;/em&gt; to start up in &lt;em&gt;Safe mode&lt;/em&gt;. If &lt;em&gt;Windows&lt;/em&gt; doesn’t restart in &lt;em&gt;Safe mode&lt;/em&gt; then repeat the process.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="windows-xp"&gt;Windows XP&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Click on the &lt;em&gt;Start&lt;/em&gt; button and select &lt;em&gt;Turn off computer&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;Restart&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;During restart, &lt;strong&gt;hold down the &lt;kbd&gt;F8&lt;/kbd&gt;&lt;/strong&gt; key on your keyboard until the &lt;em&gt;Windows&lt;/em&gt; Startup menu appears.&lt;/li&gt;
&lt;li&gt;If your PC starts beeping then release the key for a moment and press it again.&lt;/li&gt;
&lt;li&gt;When the Startup menu appears, use the arrow keys to &lt;strong&gt;select &lt;em&gt;Safe mode&lt;/em&gt;&lt;/strong&gt; and press &lt;kbd&gt;Enter&lt;/kbd&gt;.&lt;/li&gt;
&lt;li&gt;If everything went as expected, &lt;em&gt;Windows&lt;/em&gt; should start in &lt;em&gt;Safe mode&lt;/em&gt;. If &lt;em&gt;Windows&lt;/em&gt; doesn’t come up in &lt;em&gt;Safe mode&lt;/em&gt;, please try again.&lt;/li&gt;
&lt;/ul&gt;
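&lt;p&gt;On &lt;em&gt;Windows XP&lt;/em&gt; there is also an alternative to the &lt;kbd&gt;F8&lt;/kbd&gt; key: adding the &lt;code&gt;/safeboot&lt;/code&gt; switch to your entry in &lt;code&gt;C:\boot.ini&lt;/code&gt; (or ticking the /SAFEBOOT checkbox on the BOOT.INI tab of &lt;kbd&gt;msconfig&lt;/kbd&gt;) forces every following boot into &lt;em&gt;Safe mode&lt;/em&gt; until you remove the switch again. A sketch of such an entry (the disk/partition numbers vary from system to system, so edit your existing entry rather than copying this one):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect /safeboot:minimal&lt;/code&gt;&lt;/pre&gt;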
&lt;h3 id="windows-vista"&gt;Windows Vista&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Click on the &lt;em&gt;Start&lt;/em&gt; button, then on the arrow next to the Lock button and select &lt;em&gt;Restart&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;During restart, &lt;strong&gt;hold down the &lt;kbd&gt;F8&lt;/kbd&gt;&lt;/strong&gt; key on your keyboard until the Advanced Boot Options screen appears.&lt;/li&gt;
&lt;li&gt;If your PC starts beeping then release the key for a moment and press it again.&lt;/li&gt;
&lt;li&gt;When the Advanced Boot Options screen appears, use the arrow keys to &lt;strong&gt;select &lt;em&gt;Safe mode&lt;/em&gt;&lt;/strong&gt; and press &lt;kbd&gt;Enter&lt;/kbd&gt;.&lt;/li&gt;
&lt;li&gt;If everything went as expected, &lt;em&gt;Windows&lt;/em&gt; should start in &lt;em&gt;Safe mode&lt;/em&gt;. If &lt;em&gt;Windows&lt;/em&gt; doesn’t come up in &lt;em&gt;Safe mode&lt;/em&gt;, please try again.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.wikihow.com/Get-Safe-Mode-in-Windows-XP"&gt;http://www.wikihow.com/Get-Safe-Mode-in-Windows-XP&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="windows"></category><category term="setup"></category></entry><entry><title>Virus classification</title><link href="http://gw.tnode.com/windows/virus-classification/" rel="alternate"></link><updated>2012-08-08T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2005-01-15:windows/virus-classification/</id><summary type="html">
&lt;h2 id="computer-threats"&gt;Computer threats&lt;/h2&gt;
&lt;p&gt;An enormous amount of malicious software programs exist, especially for &lt;em&gt;Microsoft Windows&lt;/em&gt; operating systems, that endanger your computer or your data in various ways. Depending on the type of behavior they can be classified into a few categories.&lt;/p&gt;
&lt;dl&gt;
&lt;dt&gt;
&lt;dfn&gt;Adware&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
Software designed to display unwanted advertising, which sometimes also slows down your computer.
&lt;/dd&gt;
&lt;dt&gt;
&lt;dfn&gt;Backdoor&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
Piece of software that &lt;em&gt;bypasses normal authentication&lt;/em&gt; procedures and allows attackers easier access in the future.
&lt;/dd&gt;
&lt;dt&gt;
&lt;dfn&gt;Browser hijacker&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
Specialized software to change your home page and search engine used by web browsers for unwanted advertising or information stealing.
&lt;/dd&gt;
&lt;dt&gt;
&lt;dfn&gt;Keylogger&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
Specialized software capable of recording every keystroke you make on your computer, usually for stealing your passwords or other personal information.
&lt;/dd&gt;
&lt;dt&gt;
&lt;dfn&gt;Rootkit&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
Software packages that try to &lt;em&gt;avoid detection&lt;/em&gt; by the user by modifying the operating system. Usually they are deployed with backdoors.
&lt;/dd&gt;
&lt;dt&gt;
&lt;dfn&gt;Spyware&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
A special kind of trojan horse designed to spy on all user actions in all programs. Records keystrokes, passwords, personal information, emails, web surfing activity, and submits harvested information to a malicious user.
&lt;/dd&gt;
&lt;dt&gt;
&lt;dfn&gt;Trojan horse&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
A broad term for malicious programs &lt;em&gt;disguised as something normal&lt;/em&gt; or desirable, that attempt to trick the user to willfully install it without realizing its harmful potential.
&lt;/dd&gt;
&lt;dt&gt;
&lt;dfn&gt;Virus&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
A computer virus is a &lt;em&gt;self-replicating program&lt;/em&gt; that spreads by inserting copies of itself into other executable code or documents and it behaves similarly to a biological virus.
&lt;/dd&gt;
&lt;dt&gt;
&lt;dfn&gt;Worms&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
A variant of computer viruses capable of spreading on their own to other computers over the internet or local networks. Usually their main purpose is to attack certain web sites, send spam, or act as a trojan horse.
&lt;/dd&gt;
&lt;dt&gt;
&lt;dfn&gt;Malware&lt;/dfn&gt;
&lt;/dt&gt;&lt;dd&gt;
A catch-all term for any software designed to cause damage to a single computer, server or computer network, whether it’s a virus, spyware, trojan horse… It is short for malicious software.
&lt;/dd&gt;
&lt;/dl&gt;
&lt;h3 id="related"&gt;Related&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://en.wikipedia.org/wiki/Malware"&gt;http://en.wikipedia.org/wiki/Malware&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://usa.kaspersky.com/internet-security-center/threats/malware-classifications"&gt;http://usa.kaspersky.com/internet-security-center/threats/malware-classifications&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</summary><category term="windows"></category><category term="virus"></category><category term="comparison"></category></entry><entry><title>Virus Admilli Service details</title><link href="http://gw.tnode.com/windows/virus-admilli-service-details/" rel="alternate"></link><updated>2012-08-08T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2005-01-12:windows/virus-admilli-service-details/</id><summary type="html">
&lt;h2 id="what-does-admilli-service-do"&gt;What does Admilli Service do?&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Admilli Service&lt;/em&gt; has the ability to &lt;strong&gt;install itself automatically&lt;/strong&gt; while surfing on the internet with &lt;em&gt;Internet Explorer&lt;/em&gt; (even under higher security level).&lt;/p&gt;
&lt;p&gt;We were unable to determine its exact activity after installation, but it looks like some sort of &lt;strong&gt;sophisticated &lt;a href="http://gw.tnode.com/windows/virus-classification/"&gt;spyware&lt;/a&gt;&lt;/strong&gt; and should not be on your PC.&lt;/p&gt;
&lt;h3 id="antivirus-solutions"&gt;Antivirus solutions&lt;/h3&gt;
&lt;p&gt;We tried to detect and clean the virus with the following antivirus and antispyware solutions, all up to date (as of 26 December 2004), but &lt;strong&gt;none of them found anything!&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.symantec.com/"&gt;&lt;em&gt;Symantec (Norton) Antivirus&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.nod32.com/"&gt;&lt;em&gt;Eset NOD32 Antivirus System&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://housecall.trendmicro.com/housecall/start_corp.asp"&gt;&lt;em&gt;Trend Micro HouseCall&lt;/em&gt; (online virus scanner)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.lavasoftusa.com/"&gt;&lt;em&gt;Lavasoft Ad-Aware SE Personal&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.spybot.info/"&gt;&lt;em&gt;Spybot Search &amp;amp; Destroy&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;SBC Anti-Spyware&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Therefore we came to the conclusion that the threat was still unknown to the world and that it behaves differently than common viruses (because none of the heuristic detection mechanisms found anything).&lt;/p&gt;
&lt;h3 id="more-technical-results"&gt;More technical results&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Admilli Service&lt;/em&gt; is a new spyware program that runs at least on the &lt;em&gt;Windows XP/98&lt;/em&gt; operating systems and has the ability to install itself automatically through &lt;em&gt;Internet Explorer&lt;/em&gt; (even under higher security restrictions in many versions of it, including 6.0 SP2). It forces &lt;em&gt;Internet Explorer&lt;/em&gt; to execute commands which download, copy and install an unsigned add-on on the system. Afterwards two new services, called &lt;code&gt;AdmilliServ.exe&lt;/code&gt; and &lt;code&gt;AdmilliKeep.exe&lt;/code&gt;, are running from the directory &lt;code&gt;C:\Program Files\Admilli Service\&lt;/code&gt;. These two programs restart each other whenever one is closed, which makes them difficult to terminate.&lt;/p&gt;
&lt;p&gt;Here are all the things that change on a system after the installation/infection:&lt;/p&gt;
&lt;p&gt;New contents in file &lt;code&gt;C:\Windows\setupapi.log&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[2004/12/26 13:50:47 880.74]
#-198 Command line processed: "C:\Program Files\Internet Explorer\iexplore.exe"
#-024 Copying file "C:\DOCUME~1\User\LOCALS~1\Temp\ICD1.tmp\AdmilliServX.dll" to "C:\WINDOWS\Downloaded Program Files\AdmilliServX.dll".
#E361 An unsigned or incorrectly signed file "C:\DOCUME~1\User\LOCALS~1\Temp\ICD1.tmp\AdmilliServX.dll" will be installed (Policy=Ignore). Error 0x800b0100: No signature was present in the subject.&lt;/code&gt;&lt;/pre&gt;
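&lt;p&gt;The &lt;code&gt;setupapi.log&lt;/code&gt; excerpt above can also be scanned mechanically. A minimal sketch (in &lt;em&gt;Python&lt;/em&gt;, not part of the original investigation; the function name is made up) that pulls out the paths flagged by the &lt;code&gt;#E361&lt;/code&gt; unsigned-file warning:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import re

# Matches the "#E361" warning lines that setupapi.log writes when an
# unsigned file is installed (as in the excerpt above).
UNSIGNED = re.compile(r'#E361 An unsigned or incorrectly signed file "([^"]+)"')

def find_unsigned(log_text):
    """Return the paths of all files reported as unsigned in the log."""
    return UNSIGNED.findall(log_text)

sample = ('#E361 An unsigned or incorrectly signed file '
          '"C:\\DOCUME~1\\User\\LOCALS~1\\Temp\\ICD1.tmp\\AdmilliServX.dll" '
          'will be installed (Policy=Ignore).')
print(find_unsigned(sample))&lt;/code&gt;&lt;/pre&gt;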
&lt;p&gt;A file was added to &lt;code&gt;C:\WINDOWS\Downloaded Program Files\&lt;/code&gt;: a newly registered control with a strange key name and a file that is invisible in &lt;em&gt;Explorer&lt;/em&gt;, &lt;code&gt;AdmilliServX.dll&lt;/code&gt; &lt;small&gt;(23.040 bytes)&lt;/small&gt;. The key can be deleted with &lt;em&gt;Explorer&lt;/em&gt;, but to remove the file you will need the MS-DOS console or another file manager such as &lt;em&gt;Total Commander&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;All &lt;strong&gt;installed files&lt;/strong&gt; are placed into &lt;code&gt;C:\Program Files\Admilli Service\&lt;/code&gt;. These are: &lt;code&gt;AdmilliComm.dll&lt;/code&gt; &lt;small&gt;(60.928 bytes)&lt;/small&gt;, &lt;code&gt;AdmilliKeep.exe&lt;/code&gt; &lt;small&gt;(17.920 bytes)&lt;/small&gt;, &lt;code&gt;AdmilliServ.exe&lt;/code&gt; &lt;small&gt;(26.112 bytes)&lt;/small&gt;.&lt;/p&gt;
&lt;p&gt;New &lt;strong&gt;registry entries&lt;/strong&gt;:&lt;/p&gt;
&lt;pre class="registry"&gt;&lt;code&gt;[HKEY_LOCAL_MACHINE\SOFTWARE\Admilli Service]
"param"="84ff9b0589be58f2fbb4f0b2047978d6d2c681f572f44776ea800a2822cf80fd5393a5536ca9d30e8b03:3732336438643833383439636664333333373836306136353164336534633133:Internet%20Explorer:6.0%20SP2%28SV1%29:winxp:flash"
"track"=dword:00000001
"LastUpdate"=dword:41ceb3bc
"reqcount"=dword:00000002

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Management\ARPCache\Admilli Service]
"SlowInfoCache"=hex:28,02,00,00,01,00,00,00,00,00,02,00,00,00,00,00,00,58,4d,\
  a0,9e,eb,c4,01,00,00,00,00,44,00,3a,00,5c,00,76,00,69,00,72,00,75,00,73,00,\
  5c,00,41,00,64,00,6d,00,69,00,6c,00,6c,00,69,00,20,00,53,00,65,00,72,00,76,\
  00,69,00,63,00,65,00,5c,00,41,00,64,00,6d,00,69,00,6c,00,6c,00,69,00,4b,00,\
  65,00,65,00,70,00,2e,00,65,00,78,00,65,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00
"Changed"=dword:00000000

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"Admilli Service"="C:\\Program Files\\Admilli Service\\AdmilliServ.exe"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Admilli Service]
"UninstallString"="C:\\Program Files\\Admilli Service\\AdmilliServ.exe /Remove"
"DisplayName"="Admilli Service"&lt;/code&gt;&lt;/pre&gt;
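&lt;p&gt;The &lt;code&gt;Run&lt;/code&gt; key above is what makes the malware start at every boot. If you export a suspect part of the registry with &lt;em&gt;Regedit&lt;/em&gt; to a &lt;code&gt;.reg&lt;/code&gt; text file, the autorun entries can be listed with a small script. A minimal sketch (in &lt;em&gt;Python&lt;/em&gt;, not part of the original investigation; the function name is made up):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;RUN_KEY = r"[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]"

def autorun_entries(reg_text):
    """Return the name/command pairs listed under the Run key of a .reg export."""
    entries, in_run = {}, False
    for line in reg_text.splitlines():
        line = line.strip()
        if line.startswith("["):
            # Track whether we are inside the Run key section.
            in_run = (line == RUN_KEY)
        elif in_run and "=" in line:
            name, _, value = line.partition("=")
            entries[name.strip('"')] = value.strip('"')
    return entries

sample = r"""
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"Admilli Service"="C:\\Program Files\\Admilli Service\\AdmilliServ.exe"
"""
print(autorun_entries(sample))&lt;/code&gt;&lt;/pre&gt;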
&lt;p&gt;As you can see, the program also added itself to the Add or Remove Programs section in Control Panel. Because the malware came in through &lt;em&gt;Internet Explorer&lt;/em&gt;, a copy of it remains in its cache (Temporary Internet Files).&lt;/p&gt;
&lt;p&gt;You can also &lt;a href="http://gw.tnode.com/windows/f/virus/admilli-service.zip"&gt;download all the described files&lt;/a&gt; (password: &lt;kbd&gt;virus&lt;/kbd&gt;) and examine them yourself.&lt;/p&gt;
&lt;h2 id="removal-instructions-the-hard-way"&gt;Removal instructions (the hard way)&lt;/h2&gt;
&lt;p&gt;You may try some of the &lt;a href="#antivirus-solutions"&gt;antispyware solutions&lt;/a&gt; described above (once they are updated) or follow the instructions below for removing this malware the hard way:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;First of all you will somehow need to &lt;strong&gt;deactivate the program&lt;/strong&gt;. You will need to stop the processes named &lt;code&gt;AdmilliServ*&lt;/code&gt; and &lt;code&gt;AdmilliKeep&lt;/code&gt;, but this is not as easy as it looks.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;easiest method&lt;/strong&gt; (for Windows XP and NT) to effectively close &lt;em&gt;Admilli Service&lt;/em&gt; until the next reboot is to press &lt;kbd&gt;Ctrl+Alt+Del&lt;/kbd&gt; and select the Processes tab. There you just need to end the process tree of the program: &lt;em&gt;right click on &lt;code&gt;AdmilliServ&lt;/code&gt;&lt;/em&gt; and &lt;em&gt;choose the End Process Tree&lt;/em&gt; option. Both &lt;code&gt;AdmilliServ&lt;/code&gt; and &lt;code&gt;AdmilliKeep&lt;/code&gt; should disappear from the processes list and you may continue with step 2.&lt;/li&gt;
&lt;li&gt;Another &lt;strong&gt;simple method&lt;/strong&gt; is to restart your computer in &lt;a href="http://gw.tnode.com/windows/windows-xp-safe-mode/"&gt;Safe mode&lt;/a&gt;. Once there, none of the suspicious programs are running, so you may continue with step 2.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;trickier method&lt;/strong&gt; (for Windows XP and NT) is to press &lt;kbd&gt;Ctrl+Alt+Del&lt;/kbd&gt; and select the Processes tab. There you need to lower the priority under which both processes run: right click on &lt;code&gt;AdmilliServ&lt;/code&gt; and then on &lt;code&gt;AdmilliKeep&lt;/code&gt; and &lt;em&gt;set the priority to Low&lt;/em&gt;. After that &lt;em&gt;give your computer some work to do&lt;/em&gt; (launch several programs in quick succession) and in the meantime, while the computer is busy loading them, try to &lt;em&gt;select and end both processes&lt;/em&gt; &lt;code&gt;AdmilliServ&lt;/code&gt; and &lt;code&gt;AdmilliKeep&lt;/code&gt; as quickly as possible. If you are lucky and fast enough, both programs will close and won’t reappear until reboot.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is also a smart idea to disable the &lt;a href="http://gw.tnode.com/windows/windows-xp-disable-system-restore/"&gt;System Restore&lt;/a&gt; option during this process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Locate&lt;/strong&gt; the directory where &lt;em&gt;Admilli Service&lt;/em&gt; installed itself into and &lt;em&gt;delete it with all the files in it&lt;/em&gt;. It can usually be found in &lt;code&gt;C:\Program Files\Admilli Service\&lt;/code&gt;. With this action you will delete the following files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AdmilliServ.exe&lt;/code&gt; &lt;small&gt;(26.112 bytes)&lt;/small&gt; - main spyware program&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AdmilliKeep.exe&lt;/code&gt; &lt;small&gt;(17.920 bytes)&lt;/small&gt; - slave program that makes it harder to close the main one&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AdmilliComm.dll&lt;/code&gt; &lt;small&gt;(60.928 bytes)&lt;/small&gt; - unknown strange dynamic link library&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next you will need to &lt;strong&gt;edit your Registry&lt;/strong&gt;, so open the program called &lt;em&gt;Regedit&lt;/em&gt;: click on the Run option in the Start menu and enter &lt;kbd&gt;regedit.exe&lt;/kbd&gt; in the text field. Use this program with care, because invalid or deleted entries may crash your computer and leave it in an unbootable state. Now &lt;em&gt;locate the following keys&lt;/em&gt; on the left side:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Locate &lt;code&gt;HKEY_LOCAL_MACHINE\SOFTWARE\Admilli Service&lt;/code&gt;, select and delete it all by right clicking on it or pressing &lt;kbd&gt;Del&lt;/kbd&gt;.&lt;/li&gt;
&lt;li&gt;Find the registry key &lt;code&gt;HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Management\ARPCache\Admilli Service&lt;/code&gt; and delete it with all its values (right click on it or press &lt;kbd&gt;Del&lt;/kbd&gt;).&lt;/li&gt;
&lt;li&gt;Go to &lt;code&gt;HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run&lt;/code&gt; and delete the value called &lt;code&gt;Admilli Service&lt;/code&gt; (right click on it or press &lt;kbd&gt;Del&lt;/kbd&gt;).&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open up the &lt;em&gt;Command Prompt&lt;/em&gt; (MS-DOS console) or any other program that allows you to browse through your files except &lt;em&gt;Explorer&lt;/em&gt; (for example &lt;em&gt;Total Commander&lt;/em&gt; is a good alternative). &lt;strong&gt;Locate&lt;/strong&gt; the directory &lt;code&gt;C:\WINDOWS\Downloaded Program Files\&lt;/code&gt; and &lt;em&gt;delete the file&lt;/em&gt; called &lt;code&gt;AdmilliServX.dll&lt;/code&gt; &lt;small&gt;(23.040 bytes)&lt;/small&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Locate the same directory again in &lt;em&gt;Explorer&lt;/em&gt;&lt;/strong&gt; and delete the strangely named key associated with &lt;code&gt;AdmilliServX.dll&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Open up the Control Panel&lt;/strong&gt; and choose Add or Remove Programs. Locate &lt;em&gt;Admilli Service&lt;/em&gt; in it and click the &lt;em&gt;uninstall button&lt;/em&gt;. A window will pop up and complain that some files are missing, but that’s OK, because we removed them earlier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At the end you may also &lt;strong&gt;empty your Temporary Internet Files&lt;/strong&gt; cache in &lt;em&gt;Internet Explorer&lt;/em&gt;. For this you need to select the menu Tools, then Internet options and click on the Delete files button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
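&lt;p&gt;For convenience, the registry edits in step 4 can also be applied in one go by importing a &lt;code&gt;.reg&lt;/code&gt; file with &lt;em&gt;Regedit&lt;/em&gt;. A sketch based on the entries listed in the technical results above (double-check the key names on your own system before importing; on &lt;em&gt;Windows 98/ME&lt;/em&gt; the first line must read &lt;code&gt;REGEDIT4&lt;/code&gt; instead):&lt;/p&gt;
&lt;pre class="registry"&gt;&lt;code&gt;Windows Registry Editor Version 5.00

; a leading "-" deletes a whole key; "name"=- deletes a single value
[-HKEY_LOCAL_MACHINE\SOFTWARE\Admilli Service]

[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Management\ARPCache\Admilli Service]

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"Admilli Service"=-

[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Admilli Service]&lt;/code&gt;&lt;/pre&gt;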
&lt;p&gt;Now you can relax, because you are spyware-free or at least free from this &lt;em&gt;Admilli Service&lt;/em&gt; virus.&lt;/p&gt;
</summary><category term="windows"></category><category term="virus"></category><category term="issue"></category></entry><entry><title>Virus Admilli Service</title><link href="http://gw.tnode.com/windows/virus-admilli-service/" rel="alternate"></link><updated>2012-08-08T00:00:00+02:00</updated><author><name>gw0</name></author><id>tag:gw.tnode.com,2004-12-27:windows/virus-admilli-service/</id><summary type="html">
&lt;blockquote&gt;
&lt;p&gt;This page is dedicated to a then-unknown new virus threat that appeared on many &lt;em&gt;Windows XP/ME/98&lt;/em&gt; computers in January 2005 and was still spreading half a year later! Below are the results of an investigation conducted before &lt;strong&gt;any antivirus software was able to remove or even detect it&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="what-is-admilli-service"&gt;What is Admilli Service?&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Admilli Service&lt;/em&gt; seems to be an &lt;strong&gt;adware/spyware/virus&lt;/strong&gt; threat that has the ability to infect computers running the &lt;em&gt;Windows XP/ME/98&lt;/em&gt; operating system. It can &lt;strong&gt;automatically install itself&lt;/strong&gt; on your PC when you are surfing the internet with &lt;em&gt;Internet Explorer&lt;/em&gt; (even with a higher security level).&lt;/p&gt;
&lt;p&gt;It is not yet known what it does after installation… Maybe it logs all your input and collects your passwords, enables attackers to gain access to your computer or use it as a node for mass spamming, or tries to infect other computers in your local network… We were unable to determine its exact activity and &lt;a href="http://gw.tnode.com/windows/virus-classification/"&gt;classification&lt;/a&gt;, but it looks like some sort of &lt;strong&gt;sophisticated spyware&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;(Nowadays it seems that a newer version of &lt;em&gt;Admilli Service&lt;/em&gt; is spreading in the wild and it is classified by others as adware/spyware.)&lt;/p&gt;
&lt;h3 id="antivirus-solutions"&gt;Antivirus solutions&lt;/h3&gt;
&lt;p&gt;We tried to detect and clean the virus with many different antivirus and antispyware programs (like &lt;a href="http://www.symantec.com/"&gt;Symantec Antivirus&lt;/a&gt;, &lt;a href="http://www.lavasoftusa.com/"&gt;Lavasoft Ad-Aware&lt;/a&gt;…) that were all up to date (as of December 2004), but &lt;strong&gt;none of them found anything!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Therefore we came to the conclusion that the threat was still unknown to the world and that it behaves differently than common viruses.&lt;/p&gt;
&lt;h2 id="removal-instructions"&gt;Removal instructions&lt;/h2&gt;
&lt;p&gt;As nasty as the threat looks, &lt;strong&gt;it can be easily removed with a few clicks!&lt;/strong&gt; Alternatively, you may try some of the newest virus removal tools (some detect it already).&lt;/p&gt;
&lt;p&gt;The virus or spyware installs itself as a fully legitimate program inside the &lt;code&gt;C:\Program Files&lt;/code&gt; directory, with registry entries that result in a working uninstall function. So all you need to do is &lt;strong&gt;open up the Control Panel&lt;/strong&gt; (in &lt;em&gt;Windows XP&lt;/em&gt; it can be found under the Start menu) and &lt;strong&gt;choose Add or Remove Programs&lt;/strong&gt;. Locate &lt;em&gt;Admilli Service&lt;/em&gt; in the list that comes up and click the &lt;strong&gt;Remove (uninstall) button&lt;/strong&gt;. After the process finishes your computer should be spyware-free. You may also temporarily disable System Restore before doing anything and empty the Temporary Internet Files that &lt;em&gt;Internet Explorer&lt;/em&gt; stores on your computer (select the menu Tools, then Internet options and click on the Delete files button).&lt;/p&gt;
&lt;p&gt;The whole thing can also be removed with the &lt;a href="http://gw.tnode.com/windows/virus-admilli-service-details/#removal-instructions-the-hard-way"&gt;instructions (the hard way)&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="more-technical-results"&gt;More technical results&lt;/h3&gt;
&lt;p&gt;More details about the investigation can be found on the &lt;a href="http://gw.tnode.com/windows/virus-admilli-service-details/"&gt;details subpage&lt;/a&gt;.&lt;/p&gt;
</summary><category term="windows"></category><category term="virus"></category><category term="issue"></category></entry></feed>