.. _configuration:

Configuration
=============

The details of how to tune BuildGrid's configuration.

.. hint::

   In order to spin up a server instance using a given ``server.yml``
   configuration file, run:

   .. code-block:: sh

      bgd server start server.yml

   Please refer to the :ref:`CLI reference section ` for command line
   interface details.

.. _anatomy-of-a-configuration-file:

Anatomy of a configuration file
-------------------------------

BuildGrid configuration files describe how the server should run, and which
of the various components of BuildGrid should be included. This section
walks through an example configuration file to describe how each section
works.

BuildGrid configuration is stored in a YAML file, and uses custom YAML tags
to instantiate actual Python classes at parse time. These tags are
enumerated in detail in the :ref:`Parser API reference `.

.. code-block:: yaml
   :caption: Example server configuration
   :name: Example server configuration

   server:
     - !channel
       port: 50051
       insecure-mode: true

   description: >
     A simple configuration example

   authorization:
     method: none

   monitoring:
     enabled: true
     endpoint-type: udp
     endpoint-location: statsd:8125
     serialization-format: statsd
     metric-prefix: buildgrid

   instances:
     - name: ''
       description: |
         The unique '' instance.

       storages:
         - !lru-storage &cas-backend
           size: 2048M

       schedulers:
         - !sql-scheduler &scheduler
           storage: *cas-backend
           connection-string: sqlite:///./example.db
           automigrate: yes
           connection-timeout: 15
           poll-interval: 0.5

       caches:
         - !lru-action-cache &build-cache
           storage: *cas-backend
           max-cached-refs: 256
           cache-failed-actions: true
           allow-updates: true

       services:
         - !action-cache
           cache: *build-cache

         - !execution
           storage: *cas-backend
           action-cache: *build-cache
           scheduler: *scheduler
           max-execution-timeout: 7200

         - !cas
           storage: *cas-backend

         - !bytestream
           storage: *cas-backend

This is a configuration which results in a BuildGrid server listening on
port 50051 containing the following gRPC services:
* ActionCache service
* Execution service
* Operations service
* Bots service
* CAS service
* ByteStream service

Let's go through this config piece by piece.

.. code-block:: yaml
   :caption: ``server`` key
   :name: server key

   server:
     - !channel
       port: 50051
       insecure-mode: true

The ``server`` key contains a list of ``Channel`` objects, which define what
ports the gRPC server should bind to. These ``Channel`` objects are
generated by the parser when it finds a ``!channel`` tag.

.. code-block:: yaml
   :caption: ``description`` key
   :name: description key

   description: >
     A simple configuration example

The ``description`` key expects a string value. This is intended to be a
human-readable string describing the configuration. This key is completely
optional.

.. code-block:: yaml
   :caption: ``authorization`` section
   :name: authorization section

   authorization:
     method: none

The ``authorization`` section specifies what auth is expected by BuildGrid.
Currently BuildGrid supports JWT-based authorization, allowing access to
services to be restricted based on the callers' JWT content. This key is
optional, and the default is no authorization. This allows all clients
access to all services in the config, unless a proxy between the client and
BuildGrid does some other authorization.

.. code-block:: yaml
   :caption: ``monitoring`` section
   :name: monitoring section

   monitoring:
     enabled: true
     endpoint-type: udp
     endpoint-location: statsd:8125
     serialization-format: statsd
     metric-prefix: buildgrid

The ``monitoring`` section contains configuration for BuildGrid's metrics
publishing functionality. This is disabled by default, but when enabled
allows configuring where and how metrics should be written. This example
publishes metrics to a UDP port (``endpoint-type: udp``) located at
``statsd:8125`` (``endpoint-location``). The metrics will be written in
StatsD format (``serialization-format``) with the metric names prefixed
with ``buildgrid`` (``metric-prefix``).
.. code-block:: yaml
   :caption: ``instances`` section
   :name: instances section

   instances:
     - ...

The ``instances`` section defines a list of the "instances" to serve from
the configured BuildGrid. These instances are self-contained sets of gRPC
services, with distinct "instance names" which allow clients to select
which set of services to use. Normally there will only be a need for a
single instance in any given config file, to maximise the number of gRPC
handler threads available for the instance. However, multiple elements in
this list are supported.

.. code-block:: yaml
   :caption: ``thread-pool-size`` key
   :name: thread-pool-size key

   thread-pool-size: 100

This defines the size of the thread pool used to provide gRPC handler
threads. This is a hard cap on the number of gRPC requests that the server
can be handling at any given time, with further requests being rejected to
avoid deadlocks (eg. a situation where no workers can connect because the
connections are full of Execute requests, but those requests can't be
handled because no workers can connect).

Instance Configuration
~~~~~~~~~~~~~~~~~~~~~~

Each instance in the ``instances`` list is a complex object with several
non-standard YAML tags. Let's look at the example instance in a bit more
detail too.

.. code-block:: yaml
   :caption: ``name`` key
   :name: name key

   instances:
     - name: ""

This key defines the instance name for this instance. This name is used by
clients when connecting, to provide a way to (eg) separate workload into
different logical sections of infrastructure.

.. code-block:: yaml
   :caption: ``description`` key
   :name: instance description key

   instances:
     - ...
       description: |
         An instance description goes here

This key contains a human-readable description of the instance. This is
completely optional, but allows a place for details about the instance to
be documented inside the config file.

.. code-block:: yaml
   :caption: ``storages`` section
   :name: storages section

   instances:
     - ...
       storages:
         - !lru-storage &cas-backend
           size: 512MB

This section contains a list of objects tagged with one of the YAML tags
which get parsed as storage backend implementations. The parsing of the
tags in this section actually instantiates the storage backend objects
which BuildGrid uses internally, and we hook them up with the gRPC service
implementations later in the instance config.

This storage is `anchored`_ as ``cas-backend``, so that we can refer to the
same Python object constructed from this node later in the configuration.

.. _anchored: https://yaml.org/spec/1.2.2/#anchors-and-aliases

.. code-block:: yaml
   :caption: ``schedulers`` section
   :name: schedulers section

   instances:
     - ...
       schedulers:
         - !sql-scheduler &scheduler
           storage: *cas-backend
           connection-string: sqlite:///./example.db
           automigrate: yes
           connection-timeout: 15
           poll-interval: 0.5

This section is a list of objects tagged with one of the YAML tags which
get parsed as scheduler backend implementations. Most commonly this will
likely be an SQL scheduler, as in this example.

The ``storage`` key in this object expects an object annotated with a
storage tag. In this example we're aliasing the ``cas-backend`` storage
object we anchored in the ``storages`` section. This means that our SQL
scheduler will be passed the Python object we created earlier.

This approach using anchors and aliases lets us define storages in a single
place, and share the resulting Python objects amongst various other pieces
of the config which need storages. For an in-memory storage like this
example, this is important to ensure that both CAS and ByteStream are using
the same data structure for storing blobs. We'll see more on that soon.

The ``connection-string`` key is an SQLAlchemy-compatible connection
string. It should point at your database, whether that's an SQLite database
file as in this example, or a URL to a database server. Currently SQLite
and PostgreSQL are supported and tested.
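As a sketch, pointing the scheduler at a PostgreSQL server instead of
SQLite only changes the connection string; the hostname, credentials, and
database name below are placeholders, not values from this document:

.. code-block:: yaml

   schedulers:
     - !sql-scheduler &scheduler
       storage: *cas-backend
       # Placeholder credentials and hostname; substitute your own.
       connection-string: postgresql://bgd_user:secret@postgres.internal:5432/buildgrid
       automigrate: yes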
The ``automigrate`` key tells BuildGrid whether or not it should attempt to
migrate the database before using it.

.. note::

   For the SQL scheduler (and SQL CAS Index, which shares most of these
   keys), ``automigrate`` will attempt to migrate the database **at config
   parsing time**. This means the database must be ready to accept
   connections when ``bgd server`` is executed.

The ``connection-timeout`` key defines how many seconds to wait for the
database to respond to queries before timing out the request.

The ``poll-interval`` key specifies how many seconds to wait between
polling the database for current job state. This polling only happens when
not using PostgreSQL, and is used to trigger sending update messages to
Execution request clients when the state of the Job they requested changes.
When using PostgreSQL, this number is only used to decide how frequently to
check whether to stop the thread responsible for this behaviour, since it
uses ``LISTEN/NOTIFY`` to detect database updates rather than polling.

Having multiple schedulers in this list is possible, but generally not
recommended unless you want different services to use different database
settings for some reason.

.. code-block:: yaml
   :caption: ``caches`` section
   :name: caches section

   instances:
     - ...
       caches:
         - !lru-action-cache &build-cache
           storage: *cas-backend
           max-cached-refs: 256
           cache-failed-actions: true
           allow-updates: true

The ``caches`` section contains a list of ActionCache backends. These are
defined with YAML tags that end in ``-action-cache``. Like the previous
tags, these instantiate the actual Python objects that are used by
ActionCache instances to interact with their backing store. This example
creates an in-memory LRU ActionCache backend, and gives it the
``build-cache`` anchor.

Like the scheduler, the ``storage`` key here takes a CAS storage backend
object. Here we use a YAML reference to pass the same storage we defined
earlier and used in the scheduler.
``max-cached-refs`` specifies the size of this LRU ActionCache, with
``cache-failed-actions`` toggling whether or not the ActionCache should
accept ActionResults containing failures, and ``allow-updates`` specifying
whether or not ``UpdateActionCache`` messages should be allowed.

Setting ``allow-updates`` to false allows the creation of a read-only
ActionCache service. This isn't much use for an LRU cache, but for
persistent cache implementations such as the Redis ActionCache it allows a
read-only client-facing ActionCache, enforcing that only workers can
populate the cache.

.. code-block:: yaml
   :caption: ``services`` section
   :name: services section

   instances:
     - ...
       services:
         ...

The ``services`` section defines the actual gRPC services that should be
enabled in this instance. In this list we'll use more YAML tags to
instantiate the service instances, and pass references to the storages,
schedulers, and caches defined earlier.

.. code-block:: yaml
   :caption: ActionCache service

   instances:
     - ...
       services:
         - !action-cache
           cache: *build-cache

The ``!action-cache`` tag instantiates an ActionCache instance. Here we
pass the anchored ``build-cache`` cache backend that we defined earlier to
the ``cache`` key, ensuring that our ActionCache instance uses the cache we
created earlier.

.. code-block:: yaml
   :caption: Execution service

   instances:
     - ...
       services:
         ...
         - !execution
           storage: *cas-backend
           action-cache: *build-cache
           scheduler: *scheduler
           max-execution-timeout: 7200

The ``!execution`` tag creates an Execution instance, and by default a
BotsInterface and an Operations instance too. Here we pass our previously
created storage, cache, and scheduler backends to the appropriate config
keys using YAML references. Note that this means our Execution service and
ActionCache implementation are using the same backend object. We also
specify a ``max-execution-timeout`` for our Execution service here.
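The read-only ActionCache pattern described earlier needs only
``allow-updates: false``; a minimal sketch reusing this example's tags (the
``&read-only-cache`` anchor name is ours, chosen for illustration):

.. code-block:: yaml

   caches:
     - !lru-action-cache &read-only-cache
       storage: *cas-backend
       max-cached-refs: 256
       cache-failed-actions: true
       # Clients can query this cache, but UpdateActionCache calls
       # will be rejected.
       allow-updates: false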
Selecting only a subset of services is done using the ``endpoints`` key,
set to a list containing the services you want to enable, defined as
``execution``, ``bots``, and ``operations``.

The Execution service can take a number of other configuration options,
which are listed in the :ref:`Parser API reference `.

.. code-block:: yaml
   :caption: CAS and ByteStream services

   instances:
     - ...
       services:
         ...
         - !cas
           storage: *cas-backend

         - !bytestream
           storage: *cas-backend

The ``!cas`` tag creates a CAS instance, whilst the ``!bytestream`` tag
creates a ByteStream instance. For a CAS to function correctly, both of
these services need to be present in the instance configuration. We pass
the same anchored storage backend to both services, so that they're both
working to serve the same content.

Key points
~~~~~~~~~~

* Enabled gRPC services are in the ``instances`` -> ``services`` section of
  the configuration.
* The YAML tags instantiate actual Python objects.
* Reuse these objects using YAML anchors and references, to make sure
  everything gets wired up correctly.

.. _server-config-reference:

Reference configuration
-----------------------

Below is an example of the full configuration reference:

.. literalinclude:: ../../../buildgrid/_app/settings/reference.yml
   :language: yaml

See the :ref:`Parser API reference ` for details on the tagged YAML nodes
in this configuration.

.. _deployment-guidance:

Deployment Guidance
-------------------

BuildGrid is designed to be flexible about deployment topology. Each of the
services it can provide can be configured in any combination in a given
server. This section provides some example configuration files for
different deployment topologies. For details of the services, see
:ref:`understanding-the-configuration-file`.

All-in-one
~~~~~~~~~~

.. literalinclude:: ../../../data/config/default.yml
   :language: yaml

This configuration includes all the services required for remote execution
and caching in a single gRPC server.
This is an ideal configuration for trying out BuildGrid locally, but it is
not recommended for production. With this deployment you'll likely run into
issues with the number of threads available to handle incoming requests
pretty quickly if running this in a production environment. In this
configuration, all requests are sent to the same endpoint (which is exposed
on port 50051).

Separate Execution and CAS/ActionCache
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This example is for deploying two separate gRPC servers, one exposing the
Execution, Operations, and Bots services, and the other exposing the CAS,
ByteStream, and ActionCache services. In general, there's unlikely to be a
good reason to not colocate the CAS and ByteStream services no matter what
the rest of your deployment looks like.

.. literalinclude:: ../data/execution-and-bots.yml
   :language: yaml
   :caption: Execution, Operations, and Bots services

This configuration file defines the Execution, Operations, and Bots
services. The Bots service is defined separately to give an example of how
it can be independently defined. The ``!execution`` tag still supports
including a Bots service if defined as follows.

.. code-block:: yaml

   - !execution
     storage: *remote-cas
     action-cache: *remote-cache
     data-store: *state-database
     max-execution-timeout: 7200
     endpoints:
       - execution
       - operations
       - bots

Omitting the ``endpoints`` key has the same effect, as enabling all three
services is currently the default.

.. literalinclude:: ../data/cas-and-ac.yml
   :language: yaml
   :caption: CAS, ByteStream, and ActionCache services

This configuration file defines the CAS, ByteStream, and ActionCache
services. These are the services referenced by the ``!remote-storage`` and
``!remote-action-cache`` tags in the earlier configuration. This
configuration is a bit more production-ready than the all-in-one example,
however there are still a few limitations:
- PostgreSQL should be used for the scheduler's data store, rather than
  SQLite.
- The ActionCache is probably too small for real use.
- The ActionCache as configured here won't support horizontal scaling,
  which will be needed to handle a good amount of incoming requests (due to
  the thread limit being set to 1000).

It is also possible (if your client supports it) to split out the services
further, for example splitting the ActionCache service out into a separate
server, and similarly moving out the Bots service. It's worth noting that
Bazel doesn't support that topology for the ActionCache, since it assumes
the ActionCache to be colocated with CAS. This kind of further splitting
can be useful for targeting specific parts of the deployment for horizontal
scaling.

Behind a Proxy
~~~~~~~~~~~~~~

BuildGrid can be deployed behind a gRPC proxy to allow services to be
deployed separately as described above, whilst providing the ease of having
all services exposed via a single URL. This also avoids the aforementioned
need to colocate the CAS and ActionCache in order to support Bazel as a
client, since pointing Bazel at a proxy which can route to separate CAS and
ActionCache services is functionally the same.

BuildGrid should work behind any web server which can handle routing gRPC
requests, for example `nginx`_ or `Envoy`_. The proxy should be configured
to route requests to the relevant service, with GetCapabilities requests
being routed to the Execution service.

The Execution service has special handling of GetCapabilities requests,
whereby it also forwards the request to the CAS and ActionCache it is
configured to use, and combines the results before returning. This allows
it to effectively report on the capabilities of the whole BuildGrid
deployment.

.. _nginx: https://www.nginx.com/
.. _Envoy: https://www.envoyproxy.io/
.. literalinclude:: ../data/nginx-routing.conf
   :caption: Example nginx config

In this example routing is done at the service level, with each request
being routed to the relevant backend BuildGrid service. Note that requests
to Capabilities are routed to the Execution service. A more complex
deployment may find it useful to route at the request level, for example
routing ByteStream Write requests to a specific place.

.. _configuration-location:

Configuration location
----------------------

Unless a configuration file is explicitly specified on the command line
when invoking ``bgd``, BuildGrid will always attempt to load configuration
resources from ``$XDG_CONFIG_HOME/buildgrid``. On most Linux based systems,
the location will be ``~/.config/buildgrid``. This location is referred to
as ``$CONFIG_HOME`` in the rest of the document.

.. _tls-encryption:

TLS encryption
--------------

Every BuildGrid gRPC communication channel can be encrypted using SSL/TLS.
By default, the BuildGrid server will try to set up secure gRPC endpoints
and return an error if that fails. You must specify ``--allow-insecure``
explicitly if you want it to use non-encrypted connections.

The TLS protocol handshake relies on an asymmetric cryptography system that
requires the server and the client to own a public/private key pair.
BuildGrid will try to load keys from these locations by default:

- Server private key: ``$CONFIG_HOME/server.key``
- Server public key/certificate: ``$CONFIG_HOME/server.crt``
- Client private key: ``$CONFIG_HOME/client.key``
- Client public key/certificate: ``$CONFIG_HOME/client.crt``

Server key pair
~~~~~~~~~~~~~~~

The TLS protocol requires a key pair to be used by the server. The
following example generates a self-signed key ``server.key``, which
requires clients to have a copy of the server certificate ``server.crt``.
You can of course use a key pair obtained from a trusted certificate
authority instead.
.. code-block:: sh

   openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -batch \
     -subj "/CN=localhost" -out server.crt -keyout server.key

Client key pair
~~~~~~~~~~~~~~~

If the server requires authentication in order to be granted special
permissions like uploading to CAS, a client side key pair is required. The
following example generates a self-signed key ``client.key``, which
requires the server to have a copy of the client certificate
``client.crt``.

.. code-block:: sh

   openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -batch \
     -subj "/CN=client" -out client.crt -keyout client.key

.. _persisting-state:

Persisting Internal State
-------------------------

BuildGrid's Execution and Bots services can be configured to store their
internal state (such as the job queue) in an external data store of some
kind. At the moment the only supported type of data store is any SQL
database with a driver supported by SQLAlchemy.

This makes it possible to restart a BuildGrid process while preserving the
Job Queue, alleviating concerns about having to finish currently queued
work before restarting the scheduler or else losing track of that work.

Upon restarting, BuildGrid will load the jobs it previously knew about from
the data store, and recreate its internal state. However, note that:

* Previous connections will need to be recreated.

  * For clients, that can be done by sending a WaitExecution request with
    the relevant operation name.
  * For bots, they can re-register by sending a CreateBotSession request to
    accept more work.

* Work executing during the restart will be re-assigned to a capable,
  newly-registered bot when it gets picked up from the queue, thus progress
  will be lost.
.. hint::

   **Permissive BotSession Mode** is an option for the Bots Interface,
   which allows configurations using a persistent scheduler to verify some
   of the ongoing leases that were assigned by a different BuildGrid
   process, making it possible to keep progress done on a lease while
   BuildGrid is restarting (or in a round-robin BuildGrid cluster).

   However, enabling this option may cause issues if used with the
   ``bot_session_keepalive_timeout`` option, e.g. BuildGrid re-queuing some
   jobs and cancelling the relevant existing leases, if the bots start
   talking to another BuildGrid process while executing the job while the
   previous process(es) they were talking to are still running and have the
   ``bot_session_keepalive_timeout`` option enabled.

   This will work well in cases where the bot is able to talk to the same
   BuildGrid process except for when that process is restarting (for
   example in a primary/backup or sticky-session set-up, configured at the
   DNS level).

   To use this feature, use the following option in the scheduler config:

   .. code-block:: yaml

      services:
        ...
        - !execution
          storage: ...
          action-cache: ...
          scheduler: ...
          permissive-bot-session: True
          ...

SQL Database
~~~~~~~~~~~~

The SQL data store implementation uses SQLAlchemy to connect to a database
for storing the job queue and related state. There are database migrations
provided, and BuildGrid can be configured to automatically run them when
connecting to the database. Alternatively, this can be disabled and the
migrations can be executed manually using Alembic.

When using the SQL Data Store with the default configuration (e.g. no
``connection-string``), a temporary SQLite database will be created for the
lifetime of BuildGrid's execution.

.. hint::

   **SQLite in-memory databases are not supported** by BuildGrid, to ensure
   multiple threads can share the same state database without any issues
   (using SQLAlchemy's ``StaticPool``).

SQLite Configuration Block Example
''''''''''''''''''''''''''''''''''
.. code-block:: yaml

   instances:
     - name: ''
       storages:
         - !lru-storage &cas-storage
           size: 2048M

       schedulers:
         - !sql-scheduler &state-database
           storage: *cas-storage
           # ... or don't specify the connection-string and BuildGrid
           # will create a tempfile
           connection-string: sqlite:////path/to/sqlite.db

       services:
         - !execution
           storage: *cas-storage
           scheduler: *state-database

PostgreSQL Configuration Block Example
''''''''''''''''''''''''''''''''''''''

.. code-block:: yaml

   instances:
     - name: ''
       storages:
         - !lru-storage &cas-storage
           size: 2048M

       schedulers:
         - !sql-scheduler &state-database
           storage: *cas-storage
           connection-string: postgresql://username:password@sql_server/database_name

           # SQLAlchemy Pool Options
           pool-size: 5
           pool-timeout: 30
           pool-pre-ping: yes
           pool-recycle: 3600
           max-overflow: 10

       services:
         - !execution
           storage: *cas-storage
           scheduler: *state-database

With ``automigrate: no``, the migrations can be run by cloning the `git
repository`_, modifying the ``sqlalchemy.url`` line in ``alembic.ini`` to
match the ``connection-string`` in the configuration, and executing

.. code-block:: sh

   tox -e venv -- alembic --config ./alembic.ini upgrade head

in the root directory of the repository. The docker-compose files in the
`git repository`_ offer an example approach for PostgreSQL.

.. hint::

   For the creation of the database, and depending on the permissions and
   database config, you may need to create and initialize the database
   before Alembic can create all the tables for you. If Alembic fails to
   create the tables because it cannot read or create the
   ``alembic_version`` table, you could use the following SQL command:

   .. code-block:: sql

      CREATE TABLE alembic_version (
          version_num VARCHAR(32) NOT NULL,
          CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
      )

.. _git repository: https://gitlab.com/BuildGrid/buildgrid
.. _monitoring-configuration:

Automatic job pruning
'''''''''''''''''''''

When a job completes, its associated record will remain in the database so
that queries continue to reflect its status. The automatic pruning
mechanism ensures that jobs that have been completed for longer than a
given age are removed from the database, freeing up space.

When enabled, a cleanup routine will spawn periodically every
``pruner-period`` and delete jobs that are older than
``pruner-job-max-age``. Internally, it follows this logic:

.. code-block:: pseudocode

   pruning_thread():
       every pruner-period:
           delete at most pruner-max-delete-window jobs
               older than pruner-job-max-age

Because the delete operation will block the database, another option,
``pruner-max-delete-window``, allows setting an upper bound on the number
of records that can be deleted in one pass.

.. note::

   A lower ``pruner-max-delete-window`` size will make each pruning pass
   less expensive, but will make the recovery of free space take longer.

Configuration
^^^^^^^^^^^^^

The example below shows an SQL-backed scheduler that will keep jobs for 90
days after their completion, pruning at most 10k database entries every 48
hours. Durations can be specified as floating-point amounts of ``weeks``,
``days``, ``hours``, and combinations thereof.

.. code-block:: yaml

   schedulers:
     - !sql-scheduler &state-database
       storage: *cas-storage
       connection-string: sqlite:///./example.db
       automigrate: yes
       connection-timeout: 15
       poll-interval: 0.5

       # Automatic pruning options:
       pruner-job-max-age:
         days: 90
       pruner-period:
         hours: 48
       pruner-max-delete-window: 10000

Monitoring and Metrics
----------------------

BuildGrid provides a mechanism to output its logs in a number of formats,
in addition to printing them to stdout. Log messages can be formatted as
JSON or the binary form of the protobuf messages, and can be written to a
file, a UNIX domain socket, or a UDP port.
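Based on the options described above, a monitoring block that writes
JSON-formatted messages to a file might look like the following sketch. The
exact ``endpoint-type`` and ``serialization-format`` values here are
assumptions, and the path is a placeholder; consult the reference
configuration for the supported values:

.. code-block:: yaml

   monitoring:
     enabled: true
     # Write to a local file rather than a UDP endpoint.
     endpoint-type: file
     endpoint-location: /var/log/buildgrid/monitoring.log
     serialization-format: json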
BuildGrid also provides some metrics to give insight into the current
health and utilisation of the BuildGrid instance. These metrics are
protobuf messages similar to the log messages, and can be configured in the
same way. Additionally, metrics can be formatted as StatsD metric strings,
to allow simply configuring BuildGrid to output its metrics to a remote
StatsD server. If the ``statsd`` format is used, then log messages are
dropped and only metrics are written to the configured endpoint. The log
messages are still written to stdout in this situation.

See :ref:`monitoring` for more details on monitoring options.

StatsD Metrics
~~~~~~~~~~~~~~

A common monitoring set-up is to have metrics published into a StatsD
server, for aggregation and display using a tool like Grafana. BuildGrid's
``udp`` monitoring ``endpoint-type`` supports this trivially.

This configuration snippet will cause metrics to be published with a
``buildgrid`` prefix to a StatsD server listening on port 8125 with a
hostname ``statsd-server`` which is resolvable by the BuildGrid instance.

.. code-block:: yaml

   monitoring:
     enabled: true
     endpoint-type: udp
     endpoint-location: statsd-server:8125
     serialization-format: statsd
     metric-prefix: buildgrid

Server Reflection
-----------------

For every service specified in the configuration file, ``buildgrid``
supports `server reflection`_. This allows clients to send requests to
specific services without knowing or having the protos.

For example, listing the details of a currently ongoing operation using the
`grpccli`_ can be done as follows:
.. code-block:: bash

   ./grpc_cli call localhost:50051 GetOperation "name: '46a5640e-c3c5-4c7e-b622-df0709540107'"
   connecting to localhost:50051
   {
    "name": "dev/46a5640e-c3c5-4c7e-b622-df0709540107",
    "metadata": {
     "@type": "type.googleapis.com/build.bazel.remote.execution.v2.ExecuteOperationMetadata",
     "stage": "QUEUED",
     "actionDigest": {
      "hash": "267d1ff6e8d45b812fbc535fdbb8b69cbd6f7401ac3cc4ba21daa02750045906",
      "sizeBytes": "138"
     }
    },
    "response": {
     "@type": "type.googleapis.com/build.bazel.remote.execution.v2.ExecuteResponse"
    }
   }
   Rpc succeeded with OK status

`Server reflection`_ is enabled by default, and can be disabled by
specifying the following key in the YAML configuration:
``server-reflection: false``

.. _server reflection: https://github.com/grpc/grpc/blob/master/doc/python/server_reflection.md
.. _grpccli: https://github.com/grpc/grpc/blob/master/doc/command_line_tool.md