
docs: minor improvements and typos
metacosm authored and bpetit committed Mar 5, 2022
1 parent 4359e1d commit 8273ced
Showing 3 changed files with 24 additions and 25 deletions.
2 changes: 1 addition & 1 deletion docs_src/compatibility.md
@@ -1,6 +1,6 @@
# Compatibility

- Scaphandre intends to provide multiple ways to gather power consumption metrics and make understanding tech services footprint possible in many situations. Depending on how you use scaph, you may have some restrictions.
+ Scaphandre intends to provide multiple ways to gather power consumption metrics and make understanding tech services footprint possible in many situations. Depending on how you use scaphandre, you may have some restrictions.

To summarize, scaphandre should provide two ways to estimate the power consumption of a service, process or machine. Either by **measuring it**, using software interfaces that give access to hardware metrics, or by **estimating it** if measuring is not an option (this is a [planned feature](https://github.com/hubblo-org/scaphandre/issues/25), not yet implemented as those lines are written, in December 2020).

@@ -4,7 +4,8 @@ Scaphandre is a tool that makes it possible to see the power being used by a sin

This sounds like a simple thing to be able to do, but in practice a number of details can make this more complex.

- So having a good mental for how it works will make it understand when and how to use Scaphandre. For simplicity we start with a simplified mental model below, before thinking about multiple processors or virtual machines - but once you understand the key ideas outlined below, it's easier to see how they can be applied to thinking about tracking power on in virtual machines, or when we have multiple processors available.
+ So having a good mental model of how it works will make it easier to understand when and how to use Scaphandre. Let's start with a simplified mental model below, before moving on to multiple processors or virtual machines - but once you understand the key ideas outlined below, it's easier to see how they can be applied to thinking about tracking power on virtual machines, or when multiple processors are available.

### How a computer works on multiple jobs at the same time

When we first think about how much energy a single process running in a computer might use, we might start with a mental model that looks like the figure below, with large, uninterrupted chunks of compute time allocated to each process.
@@ -15,38 +16,36 @@ This is easy to understand, and it matches how we might be billed for a share of

#### Timesharing of work

- However, if the reality was _exactly_ like this diagram, our computers would only ever be able to do something at a time. It's more accurate and helpful to think of computers working on lots of different jobs at the same time - they work on one job for short interval of time, then another, and another and so one. You'll often see these [small intervals of time referred to as _[jiffies][]_.
-
- [jiffies]: https://www.anshulpatel.in/post/linux_cpu_percentage/
+ However, if the reality was _exactly_ like this diagram, our computers would only ever be able to do one thing at a time. It's more accurate and helpful to think of computers working on lots of different jobs at the same time - they work on one job for a short interval of time, then another, and another and so on. You'll often see these small intervals of time referred to as _[jiffies](https://www.anshulpatel.in/post/linux_cpu_percentage/)_.

![work on jobs is split into jiffies](../img/jiffies.png)

- In a given amount of time, certain jobs that are more important, or resource intensive will use more jiffies than others. Fortunately, each job keeps a running total of the total jiffies allocated to it, so if we know how many jiffies have been used in total, it can give us an idea how much of a machine's resources are being used by a given process.
+ In a given amount of time, some jobs that are prioritized or more resource intensive will use more jiffies than others. Fortunately, each job keeps a running total of the jiffies allocated to it, so if we know how many jiffies have been used in total, it can give us an idea of how much of a machine's resources are being used by a given process.

![work on jobs is split into jiffies](../img/total-time-share.png)
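The share-of-jiffies idea above can be sketched in a few lines of Python (a hypothetical illustration with made-up numbers, not Scaphandre's actual code):

```python
def share_of_activity(process_jiffies: int, total_jiffies: int) -> float:
    """Fraction of the machine's busy CPU time attributable to one process.

    process_jiffies: jiffies accounted to our process in the window.
    total_jiffies: jiffies used by all jobs combined in the same window.
    """
    if total_jiffies == 0:
        return 0.0
    return process_jiffies / total_jiffies

# If our job was allocated 150 of the 600 jiffies spent working
# during the sampling window, it used a 25% share of the machine.
print(share_of_activity(150, 600))  # 0.25
```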
### Going from share of resources to actual power figures

It's possible without Scaphandre to understand how large a share of a machine's resources is being used by a given process.

- This is useful, by itself, but if we want to understand how much _power_ used per process, not just the share of the machine's resources, we need to know how much power is being used by the machine in absolute terms.
+ This is useful, by itself, but if we want to understand how much _power_ is used per process, not just the share of the machine's resources, we need to know how much power is being used by the machine in absolute terms.

- To do this, we need a sensor of some kind to track power usage by the machine itself. Some servers have these, like with Intel's RAPL sensors, which we cover this in more detail later on. This makes it possible to understand how much power is being used by CPUs, GPUs and so on, in terms of watts, or if we are looking at just a single process, various fractions of a watt.
+ To do this, we need a sensor of some kind to track power usage by the machine itself. Some servers have these, such as Intel's RAPL sensors, which we cover in more detail later on. This makes it possible to understand how much power is being used by CPUs, GPUs and so on, in terms of watts, or, if we are looking at just a single process, various fractions of a watt.

![Sensors provide power over time](../img/power-over-time.png)

- To understand the power used by a single process we combine both of these ideas. We count the jiffies used by _our_ job when it's being worked on, and for each jiffie, we check how much power is being drawn at those moments in time in absolute terms.
+ To understand the power used by a single process we combine both of these ideas. We count the jiffies used by _our_ job when it's being worked on, and for each jiffy, we check how much power is being drawn at those moments in time.

![Combined, we can see how much power is being drawn during 'our' jiffies](../img/power-and-share-of-usage.png)
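To make the combination concrete, here is a minimal sketch (again illustrative only, not Scaphandre's implementation): for each sampling interval we multiply our process's share of the jiffies by the power the machine was drawing, then sum the energy over all intervals.

```python
def process_energy_joules(samples, interval_s):
    """Estimate one process's energy use over a run.

    samples: list of (process_share, machine_power_watts) pairs,
             one per sampling interval.
    interval_s: length of each sampling interval in seconds.
    """
    # watts * seconds = joules; weight each interval by our share of it
    return sum(share * watts * interval_s for share, watts in samples)

# Three one-second intervals: 25% of 40 W, 10% of 35 W, 50% of 60 W
samples = [(0.25, 40.0), (0.10, 35.0), (0.50, 60.0)]
print(process_energy_joules(samples, interval_s=1.0))  # 43.5 joules
```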

- Finally, when we group together all the power readings for all our jiffies over a given time period, we can arrive at a useable figure for how much power has been used, in terms of watt hours.
+ Finally, when we group together all the power readings for all our jiffies over a given time period, we can arrive at a usable figure for how much energy has been used, in terms of watt hours.

- Once you have a figure in terms of watt hours, there are various ways you can convert this to environmental impact. A common way is to use an _emission factor_ for the electricity used, to turn it into a quantity of carbon emissions.
+ Once we have a figure in terms of watt hours, there are various ways we can convert this to environmental impact. A common way is to use an _emission factor_ for the electricity used, to turn the power consumption data into an estimate of associated carbon emissions.

![Combined, we can see how much power is being drawn during 'our' jiffies](../img/power-by-process.png)
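As a rough worked example of the emission-factor conversion (the factor below is made up for illustration; real factors vary by grid and over time):

```python
def grams_co2e(watt_hours: float, factor_g_per_kwh: float) -> float:
    """Convert energy in watt hours to grams of CO2-equivalent,
    given an emission factor expressed in gCO2e per kWh."""
    return watt_hours * factor_g_per_kwh / 1000.0

# 50 Wh of electricity on a grid emitting 300 gCO2e/kWh
print(grams_co2e(50.0, 300.0))  # 15.0 grams CO2e
```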

### Working with virtualisation and multiple processors

- While the reality is again more complicated than the diagram below, you ideas broadly apply when you introduce multiple processors too.
+ While the reality is again more complicated than the diagram below, the same ideas broadly apply when you introduce multiple processors too.

If you are able to read from sensors that can share how much power is being used by the various processors at work, and know how much of the time is being allocated to our processes during those moments, you can get a good idea of what these figures are, at a per-process level.

@@ -61,31 +60,31 @@ However, if a guest virtual machine or guest container _does_ have access to rea

## More details about how Scaphandre works


As you can see in the [prometheus exporter reference](../references/exporter-prometheus.md), scaphandre's exporters can provide process-level power consumption metrics. This section explains how that is done and how it may be improved in the future.

## Some details about RAPL

We'll talk here about the case where scaphandre is able to effectively measure the power consumption of the host (see the [compatibility](../compatibility.md) section for more on sensors and their prerequisites) and specifically about the [PowercapRAPL](../references/sensor-powercap_rapl.md) sensor.

- Let's clarify what's happening when you collect metrics with scaphandre and this sensor.
- RAPL stands for [Running Average Power Limit](https://01.org/blogs/2014/running-average-power-limit-%E2%80%93-rapl). It's a technnology embedded in most Intel and AMD x86 CPUs produced after 2012.
+ Let's clarify what's happening when you collect metrics with scaphandre and the RAPL sensor.
+ RAPL stands for [Running Average Power Limit](https://01.org/blogs/2014/running-average-power-limit-%E2%80%93-rapl). It's a technology embedded in most Intel and AMD x86 CPUs produced after 2012.

Thanks to this technology it is possible to get the total energy consumption of the CPU, the consumption per CPU socket, plus, in some cases, the consumption of the DRAM controller. In most cases this represents the vast majority of the energy consumption of the machine (except when running GPU-intensive workloads, for example).

Further improvements shall be made in scaphandre to fully measure the consumption when GPUs are involved (or a lot of hard drives on the same host...).

- Between scaphandre and those data is the powercap kernel module that writes the energy consumption in files. Scaphandre, reads those files, stores the data in buffer and then allows for more processing through the exporters.
+ Between scaphandre and this data is the powercap kernel module that writes the energy consumption to files. Scaphandre reads those files, stores the data in buffers and then allows for more processing through the exporters.
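On a Linux host with powercap, those files live under `/sys/class/powercap` (e.g. `intel-rapl:0/energy_uj` for the first CPU package). A minimal reading sketch in Python, assuming that layout (illustrative only, not Scaphandre's actual Rust code); note that the energy counter wraps back to zero once it reaches `max_energy_range_uj`, which any reader has to handle:

```python
import time

def read_uj(path):
    """Read a microjoule counter from a powercap sysfs file."""
    with open(path) as f:
        return int(f.read().strip())

def energy_delta_uj(e1, e2, max_uj):
    """Microjoules consumed between two readings of a counter
    that wraps back to 0 once it reaches max_uj."""
    return e2 - e1 if e2 >= e1 else (max_uj - e1) + e2

def average_power_watts(domain="/sys/class/powercap/intel-rapl:0", dt=1.0):
    """Average power of one RAPL domain over dt seconds."""
    max_uj = read_uj(f"{domain}/max_energy_range_uj")
    e1 = read_uj(f"{domain}/energy_uj")
    time.sleep(dt)
    e2 = read_uj(f"{domain}/energy_uj")
    return energy_delta_uj(e1, e2, max_uj) / 1e6 / dt  # uJ -> J -> W
```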

### How to get the consumption of one process?

- The PowercapRAPL sensor does actually some more than just collecting those energy consumption metrics (and casting it in power consumption metrics).
+ The PowercapRAPL sensor actually does more than just collect those energy consumption metrics (and convert them into power consumption metrics).

- Every time the exporter asks for a measurement (either periodically like in the [Stdout](../references/exporter-stdout.md) exporter, or every time a request comes like for the Prometheus exporter) the sensor reads the values of the energy counters from powercap. It then stores those values, and does the same for the CPU usage statistics of the CPU (the one you can see in `/proc/stats`) and for each running process on the machine at that time (see `/proc/PID/stats`).
+ Every time the exporter asks for a measurement (either periodically as in the [Stdout](../references/exporter-stdout.md) exporter, or whenever a request comes in, as for the Prometheus exporter) the sensor reads the values of the energy counters from powercap. It then stores those values, and does the same for the machine's CPU usage statistics (the ones you can see in `/proc/stat`) and for each process running on the machine at that time (see `/proc/PID/stat`).

- With those data it is possible to compute the ratio of CPU time actively spent for a given PID on the CPU time actively spent doing something. With this ratio we can then get the subset of power consumption that is related to that PID on a given timeframe (between two measurement requests).
+ With this data it is possible to compute the ratio of CPU time actively spent on a given PID to the total CPU time actively spent doing something. With this ratio we can then get the subset of power consumption that is related to that PID over a given timeframe (between two measurement requests).
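A simplified sketch of that ratio (illustrative field handling only; the real sensor does more bookkeeping): the machine's busy jiffies come from the aggregated `cpu` line of `/proc/stat`, and a PID's jiffies from the `utime` and `stime` fields of `/proc/PID/stat`.

```python
def busy_jiffies(proc_stat_cpu_line: str) -> int:
    """Jiffies the machine spent actively working, from the aggregated
    'cpu' line of /proc/stat (user nice system idle iowait irq softirq ...).
    Idle and iowait time are excluded."""
    fields = [int(x) for x in proc_stat_cpu_line.split()[1:]]
    return sum(fields) - fields[3] - fields[4]  # drop idle and iowait

def pid_power_watts(pid_jiffies_delta, busy_jiffies_delta, host_power_watts):
    """Attribute a share of the host's power to one PID for one window."""
    if busy_jiffies_delta == 0:
        return 0.0
    return pid_jiffies_delta / busy_jiffies_delta * host_power_watts

# Between two measurements: the machine was busy for 160 jiffies,
# our PID accounted for 40 of them, and the host drew 20 W on average.
print(pid_power_watts(40, 160, 20.0))  # 5.0
```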

### How to get the consumption of an application/a service?

- Services and programs are often not running only one PID. It's needed to aggregate the consumption of all related PIDs to know what this service is actually consuming.
+ Services and programs often run more than one PID. We need to aggregate the consumption of all related PIDs to know what a given service is actually consuming.

- To do that, in the current state of scaphandre development, you can use the Prometheus exporter, and then use Prometheus TSDB and query language capabilities. You'll find examples looking at the graphs and queries [here](https://metrics.hubblo.org). In a near future, more advanced features may be implemented in scaphandre to allow such classification even if you don't have access to a proper TSDB.
+ To do that, in the current state of scaphandre development, you can use the Prometheus exporter, and then use Prometheus and its query language capabilities. You'll find examples by looking at the graphs and queries [here](https://metrics.hubblo.org). In the near future, more advanced features may be implemented in scaphandre to allow such classification even if you don't have access to a proper [Time Series Database (TSDB)](https://en.wikipedia.org/wiki/Time_series_database).
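The aggregation itself is simple. Here is a hypothetical Python sketch (Scaphandre exposes per-process metrics; a label such as the executable name is what you would group by in a real query):

```python
from collections import defaultdict

def power_by_service(readings):
    """Sum per-PID power samples (taken at one instant) by executable name.

    readings: iterable of (pid, exe, watts) tuples.
    """
    totals = defaultdict(float)
    for _pid, exe, watts in readings:
        totals[exe] += watts
    return dict(totals)

readings = [(101, "nginx", 1.5), (102, "nginx", 0.5), (200, "postgres", 3.25)]
print(power_by_service(readings))  # {'nginx': 2.0, 'postgres': 3.25}
```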
8 changes: 4 additions & 4 deletions docs_src/explanations/internal-structure.md
@@ -1,6 +1,6 @@
# Internal structure

- Scaphandre is designed to be extensible. As it performs basically two tasks: **collecting**/pre-computing the power consumption metrics and **shipping** it, it is composed of two main components: a **sensor** and an **exporter**. Each can be implemented in different wats, to match a certain use case. When you run scaphandre from the command line, `-s` allows you to choose the sensor you want to use, and the next subcommand is the name of the exporter.
+ Scaphandre is designed to be extensible. As it basically performs two tasks: **collecting**/pre-computing the power consumption metrics and **publishing** them, it is composed of two main components: a **sensor** and an **exporter**. Each can be implemented in different ways to match a certain use case. When you run scaphandre from the command line, `-s` allows you to choose the sensor you want to use, and the next subcommand is the name of the exporter.

## Sensors

@@ -9,7 +9,7 @@ Sensors are meant to:
1. get the power consumption metrics of the host
2. make them available to the exporter

- The [PowercapRAPL](../references/sensors-powercap_rapl.md) for instance, gets and transforms metrics coming from the powercap Linux kernel module, that serves as an interface to get the data from the [RAPL](https://01.org/blogs/2014/running-average-power-limit-%E2%80%93-rapl) feature of x86 CPUs. Because this feature is only accessible when you are running on a bare metal machine, this sensor will not work in a virtual machine, except if you first run scaphandre on the hypervisor and make the VM metrics available, with the [qemu exporter](../references/exporter-qemu.md), to scaphandre running inside the virtual machine.
+ The [PowercapRAPL](../references/sensor-powercap_rapl.md) sensor, for instance, gets and transforms metrics coming from the powercap Linux kernel module, which serves as an interface to the data from the [RAPL](https://01.org/blogs/2014/running-average-power-limit-%E2%80%93-rapl) feature of x86 CPUs. Because this feature is only accessible when you are running on a bare metal machine, this sensor will not work in a virtual machine, unless you first run scaphandre on the hypervisor and make the VM metrics available, with the [qemu exporter](../references/exporter-qemu.md), to scaphandre running inside the virtual machine.

When you don't have access to the hypervisor/bare-metal machine (i.e. when you run on public cloud instances and your provider doesn't run scaphandre) you still have the option to estimate the power consumption, based on both the resources (cpu/gpu/ram/io...) consumed by the virtual machine at a given time, and the characteristics of the underlying hardware. This is the way we are designing the future [estimation-based sensor](https://github.com/hubblo-org/scaphandre/issues/25), to match that use case.

@@ -22,6 +22,6 @@ An exporter is expected to:
1. ask the sensors to get new metrics and store them for potential later use
2. export the current metrics

- The [Stdout](../references/exporter-stdout.md) exporter exposes the metrics on the standard output (in your terminal). The [prometheus](../references/exporter-prometheus.md) exporter exposes the metrics on an http endpoint, to be scraped by a [prometheus](https://prometheus.io) instance. An exporter should be created for each monitoring scenario (do you want to feed your favorite monitoring/data analysis tool with scaphandre metrics ? feel free to open a [PR](https://github.com/hubblo-org/scaphandre/pulls) to create a new exporter !).
+ The [Stdout](../references/exporter-stdout.md) exporter exposes the metrics on the standard output (in your terminal). The [prometheus](../references/exporter-prometheus.md) exporter exposes the metrics on an HTTP endpoint, to be scraped by a [prometheus](https://prometheus.io) instance. An exporter should be created for each monitoring scenario (want to feed your favorite monitoring/data analysis tool with scaphandre metrics? Feel free to open a [PR](https://github.com/hubblo-org/scaphandre/pulls) to create a new exporter!).

- As introduced in the [sensors](#sensors) section, the [Qemu](../references/exporter-qemu.md) exporter, is very specific. It is only intended to collect metrics related to running virtual machines on a Qemu/KVM hypervisor. Those metrics can then be made available to each virtual machine and it's own scaphandre instance, running the [PowercapRAPL](../references/sensor-powercap_rapl.md) sensor (with the `--vm` flag on). The qemu exporter puts VM's metrics in files the same way the powercap kernel module does it. It mimics this behavior, so the sensor can act the same way it would on a bare metal machine.
+ As introduced in the [sensors](#sensors) section, the [Qemu](../references/exporter-qemu.md) exporter is very specific. It is only intended to collect metrics related to virtual machines running on a Qemu/KVM hypervisor. Those metrics can then be made available to each virtual machine and its own scaphandre instance, running the [PowercapRAPL](../references/sensor-powercap_rapl.md) sensor (with the `--vm` flag on). The qemu exporter puts each VM's metrics in files the same way the powercap kernel module does. It mimics this behavior, so the sensor can act the same way it would on a bare metal machine.
