knowledge_base:programming:docker

Dockerize Plotly Dash (Flask Dash) App

I use VSCode and Python virtual environment.

Example:

# Small Alpine-based Python image
FROM python:alpine3.19
# Work inside /app in the image
WORKDIR /app
# Copy the project in (honors .dockerignore)
COPY . /app
# Upgrade pip and install dependencies
RUN pip install -U pip && pip install -r requirements.txt
# Document the port the Dash app listens on
EXPOSE 8050
# Start the app
CMD ["python", "./indicators.py"]

Tip: use .gitignore and .dockerignore files to exclude files you don't need
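A minimal .dockerignore for a Python/Dash project might look like the following (the entries are typical suggestions, not from this project; adjust to what your repository actually contains):

```
.git
.gitignore
.vscode/
venv/
__pycache__/
*.pyc
*.tar
.env
```

Keeping virtual environments and saved tarballs out of the build context also speeds up docker build.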

Run the following CLI commands in VS Code terminal.

Build

docker build -t georgewayne188/stock_dashboard:latest .

Run, Test

docker container run -d -p 8050:8050 georgewayne188/stock_dashboard:latest
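To confirm the container came up and is actually serving, a quick check (assuming the port mapping above; replace <container id> with the ID printed by docker container run):

```shell
docker container ls              # the container should show as Up
docker logs <container id>       # Dash logs the address it is serving on
curl -I http://localhost:8050    # expect an HTTP response from the app
```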

Push to Dockerhub

docker image push georgewayne188/stock_dashboard

A reason not to use a repository, especially a public one, is that the code may contain sensitive information in plain text. In that case, save the Docker image as a tar file instead:

docker save -o <path for generated tar file> <image name>

Then copy the image to the new system with regular file transfer tools such as cp, scp, or rsync (preferred for big files). After that, load the image into Docker:

docker load -i <path to image tar file>

Note that -o should include a filename (not just a directory), and the image name may need the repository prefix (the :latest tag is the default). For example,

docker save -o C:\path\to\file.tar repository/imagename
docker save -o C:\Users\cpan\Tools\Git\repos\stock_dashboard.tar georgewayne188/stock_dashboard:latest
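On Linux, the full save, transfer, and load round trip might look like this (paths and the nas.local hostname are illustrative, not from this project):

```shell
# On the build machine: save the image (the tag defaults to :latest)
docker save -o stock_dashboard.tar georgewayne188/stock_dashboard:latest

# Transfer to the target host (rsync resumes well and shows progress)
rsync -avP stock_dashboard.tar user@nas.local:/tmp/

# On the target host: load the image into Docker and confirm it is listed
docker load -i /tmp/stock_dashboard.tar
docker image ls
```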

PS: You may need to run all commands with sudo.

To import images:

Click Action > Import and choose to add from a URL or a file:

  • Add from URL: Enter the URL of the Docker Hub image page or repository, such as MySQL or https://hub.docker.com/_/mysql.
  • Add from file: Select the image file previously exported to DSM or upload an image file from your computer.

To export images:

  • Select the image you want to export.
  • Click Action > Export and select where to save the image.

By default, the Dash app is available at http://localhost:8050. To access it from the internet over HTTPS, use Synology reverse proxy.

See https://isolo.org/dokuwiki/knowledge_base/home_it/synology/matomo?s[]=reverse&s[]=proxy#create_reverse_proxy_for_https_access for creating reverse proxy.

import os

from dash import Dash
from dash_auth import OIDCAuth

app = Dash(__name__)
auth = OIDCAuth(app, secret_key=os.environ['SECRET_KEY'])
auth.register_provider(
    "stock",
    token_endpoint_auth_method="client_secret_post",
    client_id=os.environ['APP_ID'],
    client_secret=os.environ['APP_SECRET'],
    server_metadata_url=os.environ['SERVER_URL'],
)

Note: OIDCAuth requires the Authlib Python package (importing OIDCAuth fails without it)
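Since the snippet above reads os.environ['...'] directly, a missing variable surfaces only as a bare KeyError at startup. A small sketch that fails fast with a clearer message (the helper name load_oidc_settings is my own, not part of dash_auth):

```python
import os

# The four variables the OIDCAuth setup above expects.
REQUIRED_VARS = ["SECRET_KEY", "APP_ID", "APP_SECRET", "SERVER_URL"]

def load_oidc_settings(env=os.environ):
    """Collect the OIDC settings, raising one clear error listing
    every missing variable instead of a bare KeyError on the first."""
    missing = [v for v in REQUIRED_VARS if v not in env]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {v: env[v] for v in REQUIRED_VARS}
```

The returned dict can then feed OIDCAuth and register_provider directly.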

The following example obtains and remembers the username that was authorized.

class OIDCAuthCustom(OIDCAuth):  # override OIDCAuth to capture logged-in user info

    def __init__(self, *args, **kwargs):
        self.username = None
        super().__init__(*args, **kwargs)

    def callback(self, idp: str):
        return_value = super().callback(idp)

        client = self.get_oauth_client(idp)
        self.username = client.userinfo().get("username")
        # ...

        return return_value
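The pattern above (call super(), then record extra state before returning the parent's value) can be exercised in isolation with a stand-in base class; the classes below are illustrative stand-ins, not part of dash_auth:

```python
class FakeOIDCAuth:
    """Stand-in playing the role of OIDCAuth for illustration only."""
    def callback(self, idp):
        return f"redirect after login via {idp}"

class FakeOIDCAuthCustom(FakeOIDCAuth):
    """Same pattern as OIDCAuthCustom: run the parent's callback,
    then remember who logged in before returning the parent's value."""
    def __init__(self):
        self.username = None

    def callback(self, idp):
        return_value = super().callback(idp)
        # In the real class this is: self.get_oauth_client(idp).userinfo().get("username")
        self.username = "alice"
        return return_value
```

After the callback runs, auth.username can be read from anywhere the auth object is visible, e.g. inside a Dash callback.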

Synology package: SSO Server

How to debug: look at the SSO Server log for hints.

Import Docker Image Tarball

After building the Docker image and saving it as a tarball, it can be imported into Synology Docker using the GUI or the CLI import function. To check that the image was imported properly, run

docker image ls

Running Image with Docker Compose YML

The method described here does NOT work. Although Dash and Flask themselves run server-side in Python, Dash's front end is JavaScript running in the user's browser, and the OIDC login redirect happens there. The SSO server must therefore be accessible from the internet.

The important thing here is extra_hosts. We're running everything locally so we need to take care of several things:

  • To access the host IP address, we really need to use the sham IP address. Please see explanation here
  • SSO will reject the authorization request if the well-known server URL does not match its valid certificate. So we must use a valid A record such as https://www.isolo.org:5008/webman/sso/.well-known/openid-configuration. However, we don't want to open the port to the external world (that is handled by the reverse proxy), and we don't need to, because the Docker container is running locally as well. The solution is to add a local DNS record to the container's /etc/hosts file, routing www.isolo.org to the local server via the extra_hosts: directive.
version: "3"

services:
  server:
    container_name: stockdash
    image: georgewayne188/stock_dashboard:latest
    extra_hosts:
      - www.isolo.org:192.168.11.23
    environment:
      - USER_UID=1000
      - USER_GID=1000
    ports:
      - "8058:8050"
    restart: unless-stopped
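With the file above saved as docker-compose.yml, the stack can be started and checked like this (assuming Docker Compose v2, where compose is a docker subcommand):

```shell
docker compose up -d            # create and start the container in the background
docker compose ps               # verify the service is running
docker compose logs -f server   # follow the app's logs; Ctrl-C to stop following
```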

During development on a Windows machine, I had to add a local DNS entry in my Pi-hole DNS server to point the URL at the local server IP address. The entire project is here

  • Last modified: 2024/10/07 15:34
  • by Normal User