
Modern Good Practices for Python Development


Python has a long history, and it has evolved over time. This article describes some agreed modern best practices.

Use a Helper to Run Python Tools #

Use either pipx or uv to run Python tools on development systems, rather than installing these applications with pip or another method. Both pipx and uv automatically put each application into a separate Python virtual environment.

Always follow the instructions on the pipx website to install pipx on your operating system. This ensures that pipx works correctly with an appropriate Python installation.

Use the pipx run feature of pipx for most Python applications, or uvx, which is the equivalent command for uv. These download the application to a cache and run it. For example, these commands download and run the latest version of bpytop, a system monitoring tool:

pipx run bpytop
uvx bpytop

The bpytop tool is cached after the first download, which means that the second use of it will run as quickly as an installed application.

Use pipx install or uv tool install for tools that are essential to your development process. These options install the tool onto your system. This ensures that the tool is available when you have no Internet access, and that you keep the same version of the tool until you decide to upgrade it.

For example, if you use pre-commit you should install it, rather than use a temporary copy. The pre-commit tool automatically runs every time that you commit a change to version control, so you want it to be consistent and always available. To install pre-commit, run the appropriate command for pipx or uv:

pipx install pre-commit
uv tool install pre-commit

Using Python for Development #

Avoid Using the Python Installation in Your Operating System #

If your operating system includes a Python installation, avoid using it for your projects. This Python installation is for system tools. It is likely to use an older version of Python, and may not include all of the standard features. An operating system copy of Python should be marked to prevent you from installing packages into it, but not all operating systems set this marker.

Install Python With Tools That Support Multiple Versions #

Instead of manually installing Python onto your development systems with packages from the Python website, use a version manager tool like mise or pyenv. These tools allow you to switch between different versions of Python. This means that you can choose a Python version for each of your projects, and upgrade them to new versions of Python later without interfering with other tools and projects that use Python. I provide a separate article on using version managers.

Alternatively, consider using Development Containers, which are a feature of Visual Studio Code and JetBrains IDEs. Development Containers enable you to define an isolated environment for a software project, which means that it will have a completely separate installation of Python.

Whichever tool you use, ensure that it compiles Python, rather than downloading standalone builds. These standalone builds are modified versions of Python that are maintained by Astral, not the Python project.

Both the pyenv tool and the Visual Studio Code Dev Container feature automatically compile Python, but you must change the mise configuration to use compilation.

Only use the Python installation features of uv, PDM and Hatch for experimental projects. These project tools always download third-party standalone builds of Python when a user requests a Python version that is not already installed on the system.

Use the Most Recent Version of Python That You Can #

For new projects, choose the most recent stable version of Python 3. This ensures that you have the latest security fixes, as well as the fastest performance.

Upgrade your projects as new Python versions are released. The Python development team usually support each version for five years, but some Python libraries may only support each version of Python for a shorter period of time. If you use tools that support multiple versions of Python and automated testing, you can test your projects on new Python versions with little risk.

Avoid using Python 2. Older operating systems include Python 2, but it is not supported by the Python development team or by the developers of most popular Python libraries.

Use a Project Tool #

Choose a project tool for Python. There are several of these tools, each of which provides the same essential features. For example, all of these tools can generate a directory structure that follows best practices and they can all automate Python virtual environments, so that you do not need to manually create and activate environments as you work.

Poetry is currently the most popular tool for Python projects. It is mature and well-supported. Some projects use Hatch, which provides a well-integrated set of features for building and testing Python packages. Consider using PDM or uv for new projects. PDM and uv closely align to the latest Python standards.

Avoid using Rye. Rye has been superseded by uv.

You may need to create projects that include Python but cannot use Python project tools. In these cases, think carefully about the tools and directory structure that you will need, and ensure that you are familiar with the current best practices for Python projects.

Developing Python Projects #

Format Your Code #

Use a formatting tool with a plugin to your editor, so that your code is automatically formatted to a consistent style.

Consider using Ruff, which provides both code formatting and quality checks for Python code. Black was the most popular code formatting tool for Python before the release of Ruff.

Use pre-commit hooks to run the formatting tool before each commit to source control. You should also run the formatting tool with your CI system, so that it rejects any code that does not match the format for your project.

Use a Code Linter #

Use a code linting tool with a plugin to your editor, so that your code is automatically checked for issues.

Consider using Ruff for linting Python code. The previous standard linter was flake8. Ruff includes the features of both flake8 and the most popular plugins for flake8, along with many other capabilities.

Use pre-commit hooks to run the linting tool before each commit to source control. You should also run the linting tool with your CI system, so that it rejects any code that does not meet the standards for your project.

Use Type Hinting #

Current versions of Python support type hinting. Consider using type hints in any critical application. If you develop a shared library, use type hints.

Once you add type hints, type checkers like mypy and pyright can check your code as you develop it. Code editors will read type hints to display information about the code that you are working with. You can also add a type checker to your pre-commit hooks and CI to validate that the code in your project is consistent.

If you use Pydantic in your application, it can work with type hints. If you use mypy, add the plugin for Pydantic to improve the integration between mypy and Pydantic.

PEP 484 - Type Hints and PEP 526 – Syntax for Variable Annotations define the notation for type hinting.
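As a minimal sketch of what a type checker can verify, here is a function with type hints. The function name and values are illustrative, not part of any library; the built-in generic syntax `list[float]` requires Python 3.9 or above:

```python
def mean(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not values:
        raise ValueError("values must not be empty")
    return sum(values) / len(values)

average = mean([1.0, 2.0, 3.0])
```

With these annotations, mypy or pyright will report an error if a caller passes a list of strings, or treats the return value as anything other than a float.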

Test with pytest #

Use pytest for testing. Use the unittest module in the standard library for situations where you cannot add pytest to the project.

By default, pytest runs tests in the order that they appear in the test code. To avoid issues where tests interfere with each other, always add the pytest-randomly plugin to pytest. This plugin causes pytest to run tests in random order. Randomizing the order of tests is a common good practice for software development.

To see how much of your code is covered by tests, add the pytest-cov plugin to pytest. This plugin uses coverage to analyze your code.
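A pytest test module is just a file of plain functions whose names start with `test_`, each using the bare `assert` statement. This sketch tests a toy `slugify` function (both names are illustrative):

```python
# test_slugify.py - a minimal pytest test module
def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Modern Good Practices") == "modern-good-practices"

def test_slugify_single_word():
    assert slugify("Python") == "python"
```

Running `pytest` in the project directory discovers and runs these functions; with pytest-randomly installed, the order varies on each run.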

Package Your Projects #

Always package the applications and code libraries that you would like to share with other people. Packages enable people to use your code with the operating systems and tools that they prefer to work with, and also allow them to manage which version of your code they use.

Use wheel packages to distribute the Python libraries that you create. Read the Python Packaging User Guide for an explanation of how to build and distribute wheel packages.

You can also use wheel packages to share development tools. If you publish your Python application as a wheel, other developers can run it with uv or pipx. All wheel packages require an existing installation of Python.

For all other cases, package your applications in a format that includes a copy of the required version of Python as well as your code and the dependencies. This ensures that your code runs with the expected version of Python, and that it has the correct version of each dependency. You can package applications either in container images or as executable files.

Use container images to package Python applications that are intended to be run by a service, such as Docker or a workflow engine, especially if the application provides a network service itself, such as a Web application. You can build OCI container images with Docker, buildah and other tools. OCI container images can run on any system that uses Docker, Podman or Kubernetes, as well as on cloud infrastructure. Consider using the official Python container image as the base image for your application container images.

Use PyInstaller or Nuitka to publish desktop and command-line applications as a single executable file. Each executable file includes a copy of Python, along with your code and the required dependencies. Each executable will only run on the type of operating system and CPU that it was compiled to use. For example, an executable for Windows on Intel-compatible machines will not run on macOS.

Requirements files: If you use requirements files to build or deploy projects, configure your tools to include package hashes.


Ensure That Requirements Files Include Hashes #

Python tools support hash checking to ensure that packages are valid. Some tools require extra configuration to include package hashes in the requirements files that they generate. For example, you must set the generate-hashes option for the pip-compile and uv utilities to generate requirements.txt files that include hashes. Add this option to the relevant section of the pyproject.toml file.

For pip-tools, add the option to the tool.pip-tools section:

[tool.pip-tools]
# Set generate-hashes for pip-compile
generate-hashes = true

For uv, add the option to the tool.uv.pip section:

[tool.uv.pip]
# Set generate-hashes for uv
generate-hashes = true

Language Syntax #

Create Data Classes for Custom Data Objects #

Python code frequently has classes for data objects: items that exist to store values, but do not carry out actions. If you are creating classes for data objects in your Python code, consider using either Pydantic or the built-in data classes feature.

Pydantic provides validation, serialization and other features for data objects. You need to define the classes for Pydantic data objects with type hints.

The built-in Python syntax for data classes offers fewer capabilities than Pydantic. The data class syntax does enable you to reduce the amount of code that you need to define data objects. Each data class acts as a standard Python class. Data classes also provide a limited set of extra features, such as the ability to mark instances of a data class as frozen.

PEP 557 describes data classes.
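As an illustrative sketch, this data class defines a frozen data object. The class name is hypothetical; the decorator generates `__init__()`, `__repr__()` and equality by value, and `frozen=True` makes instances immutable:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    """An immutable data object: compares by value, cannot be mutated."""
    x: float
    y: float

origin = Point(0.0, 0.0)
# Attempting origin.x = 1.0 raises dataclasses.FrozenInstanceError
```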

Use enum or Named Tuples for Immutable Sets of Key-Value Pairs #

Use the enum type for immutable collections of key-value pairs. Enums can use class inheritance.

Python also has collections.namedtuple() for immutable key-value pairs. This feature predates the enum type. Unlike enums, named tuples are created with a factory function, rather than a class definition, and they do not support class inheritance.
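A brief sketch of an enum (the names are illustrative): members are constants, and you can look them up by value or by name:

```python
from enum import Enum

class Color(Enum):
    """An immutable set of named constant values."""
    RED = "red"
    GREEN = "green"

# Look up a member by its value, or by its name with subscript syntax
favourite = Color("red")
```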

Format Strings with f-strings #

The f-string syntax is both more readable and faster than the older methods. Use f-strings instead of % formatting, str.format() or str.Template().

The older features for formatting strings will not be removed, to avoid breaking backward compatibility.

PEP 498 explains f-strings in detail.
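For example, an f-string embeds expressions and format specifiers directly in the string literal (the variable names here are illustrative):

```python
name = "Ada"
count = 3

# Expressions and format specifiers appear inside the braces;
# :02d pads the integer to two digits with a leading zero
message = f"{name} has {count:02d} items"
```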

Use Datetime Objects with Time Zones #

Always use datetime objects that are aware of time zones. By default, Python creates datetime objects that do not include a time zone. The documentation refers to datetime objects without a time zone as naive.

Avoid using date objects, except where the time of day is completely irrelevant. The date objects are always naive, and do not include a time zone.

Use aware datetime objects with the UTC time zone for timestamps, logs and other internal features.

To get the current time and date in UTC as an aware datetime object, specify the UTC time zone with now(). For example:

from datetime import datetime, timezone

dt = datetime.now(timezone.utc)

Python 3.9 and above include the zoneinfo module. This provides access to the standard IANA database of time zones. Previous versions of Python require a third-party library for time zones.

PEP 615 describes support for the IANA time zone database with zoneinfo.
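As a sketch, assuming Python 3.9+ and an available IANA time zone database on the system, this converts an aware UTC datetime to a named time zone. Both datetimes represent the same instant:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The current instant, as an aware datetime in UTC
now_utc = datetime.now(timezone.utc)

# The same instant, expressed in a named IANA time zone
now_oslo = now_utc.astimezone(ZoneInfo("Europe/Oslo"))
```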

Use collections.abc for Custom Collection Types #

The abstract base classes in collections.abc provide the components for building your own custom collection types.

Use these classes, because they are fast and well-tested. The implementations in Python 3.7 and above are written in C, to provide better performance than Python code.
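As an illustrative sketch, a read-only mapping built on collections.abc.Mapping: the subclass only implements three methods, and the abstract base class supplies get(), items(), keys(), __contains__() and the rest. The class name is hypothetical:

```python
from collections.abc import Mapping

class ReadOnlyConfig(Mapping):
    """A read-only mapping over a snapshot of a dictionary."""

    def __init__(self, data: dict):
        self._data = dict(data)  # copy, so later changes do not leak in

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

config = ReadOnlyConfig({"debug": False, "retries": 3})
```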

Use breakpoint() for Debugging #

This function drops you into the debugger at the point where it is called. Both the built-in debugger and external debuggers can use these breakpoints.

The breakpoint() feature was added in version 3.7 of Python.

PEP 553 describes the breakpoint() function.

Application Design #

Configuration: Use Environment Variables or TOML #

Use environment variables for options that must be passed to an application each time that it starts. If your application is a command-line tool, you should also provide options that can override the environment variables.

Use TOML for configuration files that must be written or edited by human beings. This format is an open standard that is used across Python projects and is also supported by other programming languages. For example, TOML is the default configuration file format for Rust projects.

Python 3.11 and above include tomllib to read the TOML format. If your Python software must generate TOML, you need to add Tomli-W to your project.

TOML replaces the INI file format. Avoid using INI files, even though the module for INI support has not yet been removed from the Python standard library.

Use Modern File Formats for Data #

There are now data file formats that are open, standardized and portable. If possible, use these formats:

  • JSON - Plain-text format for data objects
  • SQLite - Binary format for self-contained SQL database files
  • Apache Parquet - Binary format for efficient column-based storage

All of the versions of Python 3 include modules for JSON and SQLite.

If you need to work with other data formats, consider using a modern file format in your application and adding features to import data or generate exports in other formats when necessary.

You can use DuckDB or Pandas to import and export data to Excel file formats.

In most cases, you should use the JSON format to transfer data between systems, especially if the systems must communicate with HTTP. JSON documents can be used for any kind of data. Every programming language and modern SQL database supports JSON. You can validate JSON documents with JSON Schemas. Pydantic enables you to export your Python data objects to JSON and generate JSON Schemas from the data models.
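As a minimal sketch with only the standard library (the record contents are illustrative), serializing to a JSON string and parsing it back round-trips the data:

```python
import json

record = {"id": 42, "name": "widget", "tags": ["a", "b"]}

payload = json.dumps(record)    # serialize to a JSON string for transfer
restored = json.loads(payload)  # parse the JSON back into Python objects
```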

Use SQLite files for application data and configuration, as well as for queryable databases. They are arguably more portable and resilient than sets of plain-text files. SQLite is widely supported and designed to be resilient, and the file format is guaranteed to be stable and portable for decades. Each SQLite database file can safely be several gigabytes in size.

You can use SQLite databases for any kind of data. They can be used to store and query data in JSON format, they hold plain text with optional full-text search, and they can store binary data.
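As a sketch, assuming the bundled SQLite includes the JSON functions (standard in modern builds), you can store JSON documents in a text column and query inside them with json_extract(). The table and values are illustrative:

```python
import sqlite3

# An in-memory database for illustration; pass a file path for persistent storage
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute(
    "INSERT INTO events (payload) VALUES (?)",
    ('{"type": "login", "user": "ada"}',),
)

# json_extract() reads a value out of the stored JSON document
row = conn.execute(
    "SELECT json_extract(payload, '$.user') FROM events"
).fetchone()
conn.close()
```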

If you need to query a large set of tabular data, put a copy in Apache Parquet files and use that copy for analysis. The Parquet format is specifically designed for large-scale data operations. DuckDB and dataframes like Pandas support the Parquet format, as well as JSON and SQLite.

I provide a separate article with more details about modern data formats.

Avoid Problematic File Formats #

Avoid these older file formats:

  • CSV - Use SQLite or Apache Parquet instead
  • DBM - Use SQLite instead
  • YAML - Use TOML or JSON instead

Systems can implement legacy formats in different ways, which means that there is a risk that data will not be read correctly when you use a file that has been created by another system. Files that are edited by humans are also more likely to contain errors, due to the complexities and inconsistency of these formats.

Working with YAML Files #

If you need to work with YAML in Python, use ruamel.yaml. This supports YAML version 1.2. Avoid using PyYAML, because it only supports version 1.1 of the YAML format.

Working with CSV Files #

Python includes a module for CSV files, but consider using DuckDB instead. DuckDB provides CSV support that is tested for its ability to handle incorrectly formatted files.

If you use DuckDB or Pandas then you can import and export data to Excel file formats. Unlike CSV, Excel file formats store explicit data types for items.

Use Logging for Diagnostic Messages, Rather Than print() #

The built-in print() function is convenient for adding debugging information, but you should use logging for your scripts and applications.

Always use a structured format for your logs, such as JSON, so that they can be parsed and analyzed later. To generate structured logs, use either the logging module in the standard library, or a third-party logging module such as structlog.
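As a minimal sketch using only the standard library (a third-party package like structlog offers a richer version of this), a custom Formatter can render each log record as one JSON object per line:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("service started")  # emits: {"level": "INFO", "logger": "app", "message": "service started"}
```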

Only Use async Where It Makes Sense #

The asynchronous features of Python enable a single process to avoid blocking on I/O operations. To achieve parallelism with Python, you must run multiple Python processes. Each of these processes may or may not use asynchronous I/O.

To run multiple application processes, either use a container system, with one container per process, or an application server like Gunicorn. If you need to build a custom application that manages multiple processes, use the multiprocessing package in the Python standard library.

Code that uses asynchronous I/O must not call any function that uses synchronous I/O, such as open(), or the logging module in the standard library. Instead, you need to use either the equivalent functions from asyncio in the standard library or a third-party library that is designed to support asynchronous code.

The FastAPI Web framework supports using both synchronous and asynchronous functions in the same application. You must still ensure that asynchronous functions never call any synchronous function.

If you would like to work with asyncio, use Python 3.7 or above. Version 3.7 of Python introduced context variables, which enable you to have data that is local to a specific task, as well as the asyncio.run() function.

PEP 567 describes context variables.
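As an illustrative sketch, two coroutines run concurrently in a single process with asyncio.gather(). The function names are hypothetical, and asyncio.sleep() stands in for real non-blocking I/O:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    """Simulate a non-blocking I/O operation with asyncio.sleep()."""
    await asyncio.sleep(delay)  # never call time.sleep() in async code
    return name

async def main() -> list[str]:
    # Both coroutines wait concurrently; gather() preserves argument order
    return list(await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01)))

results = asyncio.run(main())
```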

Libraries #

Handle Command-line Input with argparse #

The argparse module is now the recommended way to process command-line input. Use argparse, rather than the older optparse and getopt.

The optparse module is officially deprecated, so update code that uses optparse to use argparse instead.

Refer to the argparse tutorial in the official documentation for more details.
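A brief sketch of argparse (the argument names are illustrative): a positional argument, a boolean flag, and parsing an explicit list of strings instead of sys.argv for demonstration:

```python
import argparse

parser = argparse.ArgumentParser(description="Greet a user.")
parser.add_argument("name", help="name of the person to greet")
parser.add_argument("--shout", action="store_true", help="print in upper case")

# Pass an explicit list for illustration; omit it to parse sys.argv
args = parser.parse_args(["ada", "--shout"])

greeting = f"Hello, {args.name}!"
if args.shout:
    greeting = greeting.upper()
```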

Use pathlib for File and Directory Paths #

Use pathlib objects instead of strings whenever you need to work with file and directory pathnames.

Consider using the pathlib equivalents for os functions.

Methods in the standard library support Path objects. For example, to list all of the files in a directory, you can use either the .iterdir() method of a Path object, or the os.scandir() function.

This RealPython article provides a full explanation of the different Python functions for working with files and directories.
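For example, Path objects compose with the / operator and expose the parts of a pathname as attributes (the path shown is illustrative):

```python
from pathlib import Path

# Build a path with the / operator instead of string concatenation
path = Path("reports") / "2024" / "summary.txt"

stem = path.stem                   # "summary"
suffix = path.suffix               # ".txt"
renamed = path.with_suffix(".md")  # reports/2024/summary.md
```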

Use os.scandir() Instead of os.listdir() #

The os.scandir() function is significantly faster and more efficient than os.listdir(). If you previously used the os.listdir() function, update your code to use os.scandir().

This function provides an iterator, and works with a context manager:

import os

with os.scandir('some_directory/') as entries:
    for entry in entries:
        print(entry.name)

The context manager frees resources as soon as the with block exits. Use this option if you are concerned about performance or concurrency.

The os.walk() function now calls os.scandir(), so it automatically has the same improved performance as this function.

The os.scandir() function was added in version 3.5 of Python.

PEP 471 explains os.scandir().

Run External Commands with subprocess #

The subprocess module provides a safe way to run external commands. Use subprocess rather than os.system() or the older process functions in the os module, such as os.spawn* and os.popen(). The subprocess.run() function in current versions of Python is sufficient for most cases.

PEP 324 explains subprocess in detail.
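As a sketch, this runs the current Python interpreter as an external command. The command is passed as a list, so no shell is involved, and check=True raises an exception on failure:

```python
import subprocess
import sys

# Run an external command without a shell; the command is a list of arguments
result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    capture_output=True,
    text=True,    # decode stdout and stderr as text
    check=True,   # raise CalledProcessError on a non-zero exit status
)
output = result.stdout.strip()
```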

Use httpx for Web Clients #

Use httpx for Web client applications. Many Python applications include requests, but you should use httpx for new projects.

The httpx package completely supersedes requests. It supports HTTP/2 and async, which are not available with requests.

Avoid using urllib.request from the Python standard library. It was designed as a low-level library, and lacks the features of requests and httpx.