# Flagsmith's common library
This project uses Poetry for dependency management and includes a Makefile to simplify common development tasks.
- Python >= 3.11
- Make
You can set up your development environment using the provided Makefile:
```sh
# Install everything (pip, poetry, and project dependencies)
make install

# Individual installation steps are also available
make install-pip       # Upgrade pip
make install-poetry    # Install Poetry
make install-packages  # Install project dependencies
```
Run linting checks using pre-commit:
```sh
make lint
```
Additional options can be passed to the `install-packages` target:

```sh
# Install with development dependencies
make install-packages opts="--with dev"

# Install with specific extras
make install-packages opts="--extras 'feature1 feature2'"
```
- To make use of the `test_tools` Pytest plugin, install the packages with the `test-tools` extra, e.g. `pip install flagsmith-common[test-tools]`.
- Make sure `"common.core"` is in the `INSTALLED_APPS` of your settings module. This enables the `manage.py flagsmith` commands.
- Add `"common.gunicorn.middleware.RouteLoggerMiddleware"` to `MIDDLEWARE` in your settings module. This enables the `route` label for Prometheus HTTP metrics.
- To enable the `/metrics` endpoint, set the `PROMETHEUS_ENABLED` setting to `True`. A combined settings sketch is shown after this list.
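Taken together, the settings-related steps above amount to something like the following sketch (the surrounding apps and middleware entries are placeholders):

```python
# settings.py — a minimal sketch combining the configuration steps above.
INSTALLED_APPS = [
    # ... your other apps ...
    "common.core",  # enables the `manage.py flagsmith` commands
]

MIDDLEWARE = [
    # ... your other middleware ...
    "common.gunicorn.middleware.RouteLoggerMiddleware",  # enables the `route` label for HTTP metrics
]

# Expose the /metrics endpoint.
PROMETHEUS_ENABLED = True
```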
To test your metrics using the `assert_metric` fixture:

```python
from common.test_tools import AssertMetricFixture


def test_my_code__expected_metrics(assert_metric: AssertMetricFixture) -> None:
    # When
    my_code()

    # Then
    assert_metric(
        name="flagsmith_distance_from_earth_au_sum",
        labels={"engine_type": "solar_sail"},
        value=1.0,
    )
```
The `saas_mode` fixture makes all `common.core.utils.is_saas` calls return `True`.

The `enterprise_mode` fixture makes all `common.core.utils.is_enterprise` calls return `True`.

Use the `saas_mode` mark to auto-use the `saas_mode` fixture.

Use the `enterprise_mode` mark to auto-use the `enterprise_mode` fixture.
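For example, a test that depends on SaaS mode might use the fixture or the mark like this (a sketch assuming the marks are named after the fixtures; the test bodies are purely illustrative):

```python
import pytest

from common.core.utils import is_saas


def test_saas_only_behaviour(saas_mode) -> None:
    # The saas_mode fixture patches is_saas() to return True for this test.
    assert is_saas() is True


@pytest.mark.saas_mode
def test_saas_only_behaviour__marked() -> None:
    # The mark auto-uses the saas_mode fixture, so no fixture argument is needed.
    assert is_saas() is True
```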
Flagsmith uses Prometheus to track performance metrics.
The following default metrics are exposed:
- `flagsmith_build_info`: Has the labels `version` and `ci_commit_sha`.
- `flagsmith_http_server_request_duration_seconds`: Histogram labeled with `method`, `route`, and `response_status`.
- `flagsmith_http_server_requests_total`: Counter labeled with `method`, `route`, and `response_status`.
- `flagsmith_http_server_response_size_bytes`: Histogram labeled with `method`, `route`, and `response_status`.
- `flagsmith_task_processor_enqueued_tasks_total`: Counter labeled with `task_identifier`.
- `flagsmith_task_processor_finished_tasks_total`: Counter labeled with `task_identifier`, `task_type` (`"recurring"`, `"standard"`) and `result` (`"success"`, `"failure"`).
- `flagsmith_task_processor_task_duration_seconds`: Histogram labeled with `task_identifier`, `task_type` (`"recurring"`, `"standard"`) and `result` (`"success"`, `"failure"`).
Try to come up with meaningful metrics to cover your feature when developing it. Refer to Prometheus best practices when naming your metrics and labels.

As a reasonable default, Flagsmith metrics are expected to be namespaced with the `flagsmith_` prefix.

Define your metrics in a `metrics.py` module of your Django application (see example). Contrary to the Prometheus Python client's examples and documentation, please name the metric variable exactly after your metric name.
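For instance, a counter defined this way would live in your app's `metrics.py` with the variable named exactly after the metric (a sketch; the metric and its labels are illustrative):

```python
# myapp/metrics.py — a sketch; the metric itself is illustrative.
import prometheus_client

# The variable name matches the metric name exactly.
flagsmith_spaceships_launched_total = prometheus_client.Counter(
    "flagsmith_spaceships_launched_total",
    "Total number of spaceships launched",
    labelnames=["engine_type"],
)
```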
It's generally a good idea to allow users to define histogram buckets of their own. Flagsmith accepts a `PROMETHEUS_HISTOGRAM_BUCKETS` setting so users can customise their buckets. To honour the setting, use the `common.prometheus.Histogram` class when defining your histograms. When using `prometheus_client.Histogram` directly, please expose a dedicated setting like so:
```python
import prometheus_client
from django.conf import settings

flagsmith_distance_from_earth_au = prometheus_client.Histogram(
    "flagsmith_distance_from_earth_au",
    "Distance from Earth in astronomical units",
    labelnames=["engine_type"],
    buckets=settings.DISTANCE_FROM_EARTH_AU_HISTOGRAM_BUCKETS,
)
```
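By contrast, a histogram defined with the `common.prometheus.Histogram` class honours `PROMETHEUS_HISTOGRAM_BUCKETS` without a dedicated setting (a sketch; the metric is illustrative and the constructor is assumed to mirror `prometheus_client.Histogram`):

```python
from common.prometheus import Histogram

# Buckets come from the user-configurable PROMETHEUS_HISTOGRAM_BUCKETS setting.
flagsmith_distance_from_earth_au = Histogram(
    "flagsmith_distance_from_earth_au",
    "Distance from Earth in astronomical units",
    labelnames=["engine_type"],
)
```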
For testing your metrics, refer to the `assert_metric` documentation.