Jason Haley wrote a brief tutorial to get the Pythonista started with Kubernetes. Worth reading if you are new to the topic.
So, you know you want to run your application in Kubernetes but don’t know where to start. Or maybe you’re getting started but still don’t know what you don’t know. This post walks through how to containerize an application and get it running in Kubernetes. The walk-through assumes you are a developer, or at least comfortable with the command line (preferably the bash shell).
Celery is a distributed task execution environment for Python. While the emphasis is on distribution, the worker concept also allows for configuration beyond the individual task. The first rule of optimisation may be “don’t”, but sharing database connections is low-hanging fruit in most cases, and Celery’s signals let you configure this per worker. To create a database connection for each worker instance, use these signals to open the connection when the worker starts.
This can be achieved with the worker_process_init signal, together with the corresponding worker_process_shutdown signal to clean up when the worker shuts down.
This code needs to be picked up at worker start, so the tasks.py file is a good place to keep it.
import logging
import sqlite3

from celery import Celery
from celery.signals import worker_process_init, worker_process_shutdown

log = logging.getLogger(__name__)
app = Celery('tasks', broker=CELERY_BROKER_URL)

db = None  # per-worker connection, set by the init handler below

@worker_process_init.connect
def init_worker(**kwargs):
    global db
    log.debug('Initializing database connection for worker.')
    db = sqlite3.connect("urls.sqlite")

@worker_process_shutdown.connect
def shutdown_worker(**kwargs):
    global db
    if db is not None:
        log.debug('Closing database connection for worker.')
        db.close()
The example above opens a connection to an sqlite3 database, which has concurrency issues of its own but serves here only as an illustration. The connection is established once for each individual worker at startup.
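To see why the pattern works without running a broker, here is a minimal, self-contained sketch of the same idea using plain sqlite3: one module-level connection opened once per worker process and reused by every task body. The init_worker and save_url names are hypothetical stand-ins, not part of the original article.

```python
import sqlite3

# Module-level handle, populated once per worker process
# (mirroring the worker_process_init handler above).
db = None

def init_worker():
    """Stand-in for the worker_process_init handler."""
    global db
    db = sqlite3.connect(":memory:")  # in-memory DB for illustration
    db.execute("CREATE TABLE urls (url TEXT)")

def save_url(url):
    """A task body that reuses the shared per-worker connection."""
    db.execute("INSERT INTO urls (url) VALUES (?)", (url,))
    db.commit()
    return db.execute("SELECT COUNT(*) FROM urls").fetchone()[0]

init_worker()
print(save_url("https://example.com"))  # 1
print(save_url("https://example.org"))  # 2
```

Every call to save_url hits the same connection instead of opening a new one, which is exactly what the signal handlers buy you in a real Celery worker.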
New languages enter the scene, and big data makes its mark
Spoiler: basically everything is the same as last year, but R jumped up four positions and now ranks 6th. R is a statistical language capable of munging huge amounts of data, hence the Big Data reference in the article.