
Running Dash in Docker

This is part one of a short series of posts about Dash.
The repository for this blog post is here.

Dash is an application framework for building dashboards (hence the name) or, more generally, data-visualization-heavy, highly customized web apps in Python. It is written on top of the Python (web) micro-framework Flask and uses plotly.js and react.js.

The Dash manual neatly describes how to set up your first application showing a simple bar chart using static data; you could call this the ‘Hello World’ of data visualization. I always like to do my development work (and that of my teams) in a dockerized environment, so it is OS agnostic (mostly, haha) and everyone uses the same reproducible environment. It also facilitates deployment.

So let’s get started with a docker-compose setup that will probably be all you need at first. The setup I use here is derived from my Django Docker environment; if you’re interested in that one too, I can publish a post on it as well. The environment I show here uses PostgreSQL (well, not for the Hello World example, but at some point you might need a database, so I provide one, too), but setups with other databases (MySQL/MariaDB for relational data, Neo4j for graph data, InfluxDB for time series data … you get it) are also possible.

We’ll start with a requirements.txt file that holds all the packages that need to be installed to run the app:

psycopg2>=2.7,<3.0
dash==1.11.0

Psycopg is the Python database adapter for PostgreSQL, and Dash is … well, Dash. Here you would add additional database adapters or other dependencies your app might use.

Next is the Dockerfile (call it Dockerfile.dash) to create the Python container:

FROM python:3

ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code

COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/

We derive our image from the current latest Python 3.x image. The ENV line sets the environment variable PYTHONUNBUFFERED to one, which means that stdout and stderr are completely unbuffered and go directly to the container log (we’ll talk about that one later).
Then we create a directory named code in the root directory of the image and go there (making it the current working directory) with WORKDIR.
Now we COPY the requirements.txt file into the image and RUN pip to install whatever is in there.
Finally we COPY all the code (and everything else) from the current directory into the container.

Now we create a docker-compose.yml file to tie all this stuff together and run the command that starts the web server:

version: '3'

services:

  pgsql:
    image: postgres
    container_name: dash_pgsql
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
    volumes:
      - ./pgdata:/var/lib/postgresql/data

  dash:
    build:
      context: .
      dockerfile: Dockerfile.dash
    container_name: dash_dash
    command: python app.py
    volumes:
      - .:/code
    ports:
      - "80:8080"
    depends_on:
      - pgsql

We create two so-called services: a database container running PostgreSQL and a Python container running our app. The PostgreSQL container uses the latest prebuilt image; we call it dash_pgsql and set some environment variables to initialize the first database and the standard database user. You can certainly add additional users and databases later from the psql command line. We also expose the database port 5432 to the host system, so you can use any database tool you already have to manage what’s inside that database. Finally we persist the data using a shared volume in the subdirectory pgdata. This makes sure we see all the data again when we restart the container.
Then we set up a dash container, using our previously created Dockerfile.dash to build the image, and we call it dash_dash. This sounds a bit superfluous, but this way all containers in this project will be prefixed with “dash_“. If you leave that out, docker-compose will use the project’s directory name as a prefix and append a “_1” to the end. If you later use Docker Swarm you will possibly have multiple containers running for the same service, and then they will be numbered.
The command that will be run when we start the container is python app.py. We map port 8080 of the container (which we set in app.py, bear with me) to port 80 on our host. You might have some other process already using that port; in that case change the 80 to whatever you like (8080, for example). Finally we declare that this container needs the PostgreSQL service to be started first. This currently is not needed, but it will come in handy later, since the PostgreSQL container can be a bit slow to start up, and then your app might start without a valid database resource.
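Note that depends_on only controls the start order, not whether PostgreSQL is actually ready to accept connections. Here is a minimal sketch of a readiness check you could call before starting the app; it is purely illustrative (not part of the Hello World example) and assumes the service name pgsql and the credentials from the compose file above:

import time

import psycopg2


def wait_for_postgres(retries=10, delay=2):
    """Retry connecting to the pgsql service until it accepts connections."""
    for _ in range(retries):
        try:
            conn = psycopg2.connect(
                host="pgsql",        # service name from docker-compose.yml
                dbname="postgres",
                user="postgres",
                password="postgres",
            )
            conn.close()
            return True
        except psycopg2.OperationalError:
            time.sleep(delay)
    return False

Calling wait_for_postgres() at the top of app.py (before run_server()) would keep the app from starting against a database that is not up yet.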

The last building block is our app script itself:

import dash
import dash_core_components as dcc
import dash_html_components as html

external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']

app = dash.Dash(__name__, external_stylesheets=external_stylesheets)

app.layout = html.Div(children=[
  html.H1(children='Hello Dash'),

  html.Div(children='''
    Dash: A web application framework for Python.
  '''),

  dcc.Graph(
    id='example-graph',
    figure={
      'data': [
        {'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'SF'},
        {'x': [1, 2, 3], 'y': [2, 4, 5], 'type': 'bar', 'name': u'Montréal'},
      ],
      'layout': {
        'title': 'Dash Data Visualization'
      }
    }
  )
])

if __name__ == '__main__':
  app.run_server(host='0.0.0.0', port=8080, debug=True)

I won’t explain too much of that code here, because it is just the first code example from the Dash manual. But note that I changed the parameters of app.run_server(). You can use any parameter here that the Flask server accepts.

To fire this all up, first use docker-compose build to build the Python image. Then use docker-compose up -d to start both services in the background. To see if they run as planned, use docker-compose ps. You should see two services:

Name         Command                         State  Ports
-----------------------------------------------------------------------------
dash_dash    python app.py                   Up     0.0.0.0:80->8080/tcp
dash_pgsql   docker-entrypoint.sh postgres   Up     0.0.0.0:5432->5432/tcp

Now point your browser to http://localhost (appending whatever port you used in the docker-compose file, if you changed it) and you should see the example bar chart.

You can now use any editor on your machine to modify the source code in the project directory. Changes will be loaded automatically. If you want to look at the log output of the dash container, use docker-compose logs -f dash and you should see the typical stdout of a Flask application, including the debugger PIN, something like:

dash_1 | Running on http://0.0.0.0:8080/
dash_1 | Debugger PIN: 561-251-916
dash_1 | * Serving Flask app "app" (lazy loading)
dash_1 | * Environment: production
dash_1 | WARNING: This is a development server. Do not use it in a production deployment.
dash_1 | Use a production WSGI server instead.
dash_1 | * Debug mode: on
dash_1 | Running on http://0.0.0.0:8080/
dash_1 | Debugger PIN: 231-410-660

Here you will also see when you save a new version of app.py and the web server reloads the app. To stop the environment, first use CTRL-C to exit the log tailing and then issue docker-compose down. In an upcoming episode I might show you some more things you could do with Dash and a database.


How to install dependencies from a requirements.txt file with conda

Just a little reminder: pip has this very useful option to install a bunch of packages from a single text file, usually called requirements.txt. Anaconda’s command line tool conda doesn’t support this option directly. It does support reading the package names from a file using the --yes and --file options

conda install --yes --file requirements.txt

but that does not automatically install all the dependencies. To do this, we need to iterate over the file and install each package in “single package mode”:

while read requirement; do conda install --yes $requirement; done < requirements.txt

Thanks to Luis Capelo for this snippet which I use to install dependencies in a dockerized instance of Anaconda / Jupyter (more on that in a later post).
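If you prefer to stay in Python, the same idea can also be written as a small script. This is only a sketch of the loop above; it assumes conda is on the PATH and adds a pip fallback for packages conda cannot find:

import subprocess

# Install each requirement on its own, so a missing package doesn't abort the rest.
with open("requirements.txt") as f:
    for line in f:
        package = line.strip()
        if not package or package.startswith("#"):
            continue
        result = subprocess.run(["conda", "install", "--yes", package])
        if result.returncode != 0:
            # Assumption: fall back to pip for packages not available via conda.
            subprocess.run(["pip", "install", package])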


Docker automation for PHP Developers using Python

Introduction

This post will deal with using Docker on the developer desktop. I will not talk about deploying these containers to other stages of the track to production. Maybe this is a topic for a follow-up by me or by someone who is more adept at all things DevOps.

All this started when I realized that docker-compose.yml needs an absolute path on the host for its shared volumes. This is OK, but it becomes cumbersome when you would like to have multiple development setups for multiple projects. What I wanted was a single config file to rule a complete set of Dockerfile and docker-compose.yml files, and a command line tool to manage that environment without the need to juggle several other tools and numerous options and flags.

An intermediary state consisted of a Makefile with several shell scripts for all the stuff that was hard to do in Makefiles. It worked but was a bunch of files. I wanted something cleaner with more possibilities for the future and fewer helper files.

So here it is: a Python file to rule them all (sorry for the pun …) that builds Dockerfile and docker-compose.yml from templates and a config.yml file when booting up the environment. The repository is here: https://github.com/vgoebbels/docker-php7
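The underlying idea is small enough to sketch here. This is not the actual code from the repository, just an illustration of how one might render docker-compose.yml from a Jinja2 template and a config.yml, with the absolute project path filled in automatically (template name and config keys are hypothetical):

import os

import yaml                      # PyYAML
from jinja2 import Environment, FileSystemLoader

# Load the user-editable settings from config.yml.
with open("config.yml") as f:
    config = yaml.safe_load(f)

# Insert the absolute project path, so nobody has to edit it by hand.
config["project_path"] = os.path.abspath(os.path.dirname(__file__))

# Render the compose file from its template.
env = Environment(loader=FileSystemLoader("templates"))
template = env.get_template("docker-compose.yml.j2")

with open("docker-compose.yml", "w") as f:
    f.write(template.render(**config))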

What you get

  • An Apache server running PHP 7.1 on http://localhost, with the document root (/var/www/html) as a shared volume in the www subdirectory
  • A MySQL database connected to that PHP container
  • phpMyAdmin listening on http://localhost:8080

Usage

  1. Clone the GitHub repo above. Don’t worry about the actual path to your environment; it will be determined and inserted into the docker-compose.yml file by the Python script.
  2. Install the required Python modules with
    pip install -r requirements.txt
  3. Have a look at the templates in the templates subfolder
  4. Edit the configuration options in config.yml
  5. Boot the setup using
    ./dockshell up
  6. Have a look at the running containers with
    ./dockshell status

What doesn’t work yet

Using ./dockshell sshweb and ./dockshell sshsql to log into the running containers. I was not able to enter interactive mode. For now you will have to use:

docker container exec -it <CONTAINERNAME_HERE> /bin/bash

Caveats

  • ./dockshell clean removes all containers and images. And I mean all of them. This needs to be fixed!

 


Note to self: bash path hashing /o\

Did you ever stumble across something like this: which python points to /usr/local/bin/python, but running python still starts the interpreter in /usr/bin?

Yes, this is strange, isn’t it? The PATH variable is set in the correct order (this is why ‘which’ finds the local Python). Googling this behavior at first didn’t bring up any solution. But then I came across this now closed question on Stack Overflow.

So once you know what you are looking for Google reveals lots and lots of people having trouble with path hashing. Now, my solution was quite simple:

~ $ type python
python is hashed (/usr/bin/python)
~ $ hash -t python
/usr/bin/python
~ $ hash -d python
~ $ hash -t python
-bash: hash: python: not found
~ $ which python
/usr/local/bin/python
~ $ python --version
Python 3.5.3

PS: To clear the complete bash path cache just use “hash -r”.


Changing filenames to camel case

Sometimes you’ll encounter the task of renaming files from snake_case to CamelCase, and sometimes there are a lot of those files, for example when porting a CakePHP 1 project to CakePHP 2 or 3. In CakePHP the upgrade console does a decent job of renaming a lot of files for you, but in larger projects with subfolders you’re left with an awful lot of unrenamed scripts. This is where my Python 3 script comes in. It renames the filenames given as parameters in the current directory (thanks to argparse it accepts multiple filename arguments, so shell wildcards work) from snake_case to CamelCase, preserving the extension if the filename has one. It also has a quiet mode (use -q) to suppress any output at the command line and a preview mode just like GNU make (use -n, supersedes -q), which doesn’t do anything but prints out what it would have done.
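The script itself is not reproduced in this post, but the core renaming step is simple enough to sketch. This is a minimal illustration, without the -q and -n handling:

import os
import sys


def snake_to_camel(filename):
    """Turn some_controller.php into SomeController.php, keeping the extension."""
    base, ext = os.path.splitext(filename)
    return "".join(part[:1].upper() + part[1:] for part in base.split("_")) + ext


# Rename every file given on the command line (shell wildcards expand to many names).
for path in sys.argv[1:]:
    directory, name = os.path.split(path)
    new_name = snake_to_camel(name)
    if new_name != name:
        os.rename(path, os.path.join(directory, new_name))
        print(name, "->", new_name)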



Practical tips for using map()

When using map() you can sometimes be fooled by Python’s lazy evaluation. Many functions that return complex or iterable data don’t return it directly but instead return an iterator object which, when iterated over, yields the result values.

But sometimes you need the whole result set at once, for example when map()ing over a list. To coerce Python into returning the entire resulting list, apply the list() function to the map object like this:

>>> l = [1, 2, 3]
>>> l1 = map(lambda x: x + 1, l)
>>> print(l1)
<map object at 0x10f4536d8>
>>> l1 = map(lambda x: x + 1, l)
>>> list(l1)
[2, 3, 4]

In line 5 I recreate the map object. A map object is an iterator that can be consumed only once: after something has iterated over it (list(), a for loop, …), it is exhausted and yields nothing.
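A quick demonstration of that one-shot behaviour:

>>> l1 = map(lambda x: x + 1, [1, 2, 3])
>>> list(l1)
[2, 3, 4]
>>> list(l1)
[]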

When applying a standard function with map(), you need to qualify it with the module or type it belongs to:

l=["Hello  ", "  World"]
l1=map(strip, l)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'strip' is not defined

In this case strip is a method of the built-in str type:

>>> l1 = map(str.strip, l)
>>> list(l1)
['Hello', 'World']
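If that prefix feels clumsy, the same result can of course be written as a list comprehension, which avoids the name lookup problem entirely:

>>> l = ["Hello  ", "  World"]
>>> [s.strip() for s in l]
['Hello', 'World']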

 

That’s all for now. Have fun.