Commit log

Both in terms of getting the data from the database and sending it to the
client.
This avoids using after-id and ordering by id when listing builds, which
makes listing builds faster. It does mean that the database reads may last
for a while (which can be a problem), but maybe that can be addressed in
other ways.
As it's a less well-named copy of datastore-list-agent-builds.
Previously, updating the status was used by the agent just to get back the
list of builds it had already been allocated.
Now the status that is sent is actually stored, along with the 1 minute load
average.
Move the action into the coordinator module, so that it happens outside of
the main write transaction.
This will enable it to join builds to derivations, even if it doesn't know
about the derivation being built, since it'll be able to match the outputs
with other derivations it knows about.
Forcing hooks to be sequential simplifies them, and the implementation, but
it doesn't always scale well. I'm particularly thinking about the
build-submitted and build-success hooks, the processing of which can back up
if lots of builds are being submitted or finishing successfully.
This new functionality allows hooks to be processed in parallel, which
should make it possible to manage this more effectively.
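
A minimal sketch of the idea in Python (the hook names and handler bodies
are hypothetical stand-ins, not the coordinator's implementation): events
for one hook go through a thread pool instead of being handled one at a
time.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical hook handler; in the real coordinator this would do the
# actual processing for a successfully built build.
def build_success_hook(event):
    return ("success", event)

def process_events(events, hook, max_parallel=4):
    """Process several events for the same hook at once, so a burst of
    builds finishing successfully doesn't back up the queue."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        # map preserves input order while running handlers in parallel.
        return list(pool.map(hook, events))
```

The trade-off mentioned above still applies: parallel hooks are harder to
reason about than sequential ones, so ordering-sensitive hooks would need
to opt out.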
Specifically the derivation ordered allocator.
Not sure these are the best terms to use, but I want a way to pause agents,
effectively removing them from the build allocation plan.
This is mostly motivated by the lack of disk space on bayfront, as
deactivating agents provides a way to stop the system from filling up with
builds, but I think there are more general uses as well.
Previously, the allocator worked out the derived priorities for each
build. Unfortunately this is quite a complex query, and it took a lot of
time. As a result, I think the WAL could grow excessively while this long
query was running.
To try to mitigate this, add a new table to keep track of the derived
priorities for unprocessed builds. This requires some maintenance to keep up
to date, which in turn will make things like submitting builds slower, but I
think this might help keep transaction length and WAL size down overall.
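
The maintenance trade-off can be sketched like this (Python sqlite3, with
guessed table names and a guessed definition of "derived priority" for
illustration): submitting a build also writes the cached priority, so the
allocator never has to derive it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE builds (
  id INTEGER PRIMARY KEY,
  priority INTEGER NOT NULL,
  processed BOOLEAN NOT NULL DEFAULT 0
);
-- Cache of the derived priority for each unprocessed build, so the
-- allocator can read it cheaply instead of re-deriving it each time.
CREATE TABLE unprocessed_builds_with_derived_priorities (
  build_id INTEGER PRIMARY KEY REFERENCES builds (id),
  derived_priority INTEGER NOT NULL
);
""")

def submit_build(conn, build_id, priority, dependent_priorities=()):
    # Submitting a build is now slightly slower, because it also
    # maintains the cached derived priority (here guessed to be the max
    # of the build's own priority and its dependents' priorities).
    derived = max([priority, *dependent_priorities])
    conn.execute("INSERT INTO builds (id, priority) VALUES (?, ?)",
                 (build_id, priority))
    conn.execute("""INSERT INTO unprocessed_builds_with_derived_priorities
                    VALUES (?, ?)""", (build_id, derived))
```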
Don't keep database connections around forever, as they hold on to cached
query plans, and also run the optimize pragma when closing connections.
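
A sketch of this connection-recycling pattern in Python's sqlite3 (the
class, the threshold, and the recycling-by-use-count policy are
illustrative assumptions, not the coordinator's code):

```python
import sqlite3

MAX_USES = 512  # recycle a connection after this many uses (arbitrary)

class RecyclingConnection:
    """Replace the underlying SQLite connection after a fixed number of
    uses, so stale cached query plans don't live forever."""

    def __init__(self, db_file):
        self.db_file = db_file
        self.conn = None
        self.uses = 0

    def _open(self):
        self.conn = sqlite3.connect(self.db_file)
        self.uses = 0

    def _close(self):
        # Run the optimize pragma before closing, so SQLite can update
        # its statistics based on the queries this connection ran.
        self.conn.execute("PRAGMA optimize")
        self.conn.close()
        self.conn = None

    def execute(self, sql, params=()):
        if self.conn is None:
            self._open()
        elif self.uses >= MAX_USES:
            self._close()
            self._open()
        self.uses += 1
        return self.conn.execute(sql, params)
```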
This avoids the need to create agents upfront, which could be useful when
creating many childhurd VMs or using scheduling tools to dynamically run
agents.
This will allow doing things like restricting builds by matching up their
tags to the tags of the agents.
And out of the datastore. This means that the datastore code doesn't have
too much logic in it.
And into the coordinator module. This will make adding more datastores
easier.
Don't include canceled builds in the build-for-derivation-exists? or
build-for-output-already-exists? options, as I think it makes sense to
ignore canceled builds in both cases.
Originally I was trying to keep the implementation details of the datastore
in the datastore modules, but this approach starts to crack as the
transactions become more and more complicated.
This change should help resolve issues around getting the coordinator logic
into the coordinator module, and simplify the SQLite datastore in
preparation for adding PostgreSQL support.
Some parts of this were quite slow with anything other than a small
database, so instead of running slow queries on every request, run some slow
queries once to set up the metrics, and then update them as part of the
regular changes to the database.
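
The seed-once-then-update pattern might look like this in Python (the
gauge class, metric, and schema are made up for illustration):

```python
import sqlite3

class GaugeMetric:
    """Minimal stand-in for a Prometheus-style gauge."""
    def __init__(self):
        self.value = 0
    def set(self, v): self.value = v
    def inc(self): self.value += 1
    def dec(self): self.value -= 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE builds (id INTEGER PRIMARY KEY, processed BOOLEAN)")

unprocessed_builds = GaugeMetric()

# Run the slow query once at startup to seed the metric...
unprocessed_builds.set(
    conn.execute("SELECT COUNT(*) FROM builds WHERE processed = 0")
        .fetchone()[0])

# ...then adjust it as part of the regular database changes, instead of
# re-counting on every metrics request.
def submit_build(conn, build_id):
    conn.execute("INSERT INTO builds VALUES (?, 0)", (build_id,))
    unprocessed_builds.inc()

def mark_processed(conn, build_id):
    conn.execute("UPDATE builds SET processed = 1 WHERE id = ?", (build_id,))
    unprocessed_builds.dec()
```

The cost is that every code path that changes the table must also remember
to update the metric, or it drifts from the true count.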
SQLite's usual approach doesn't always seem to contain the size of the WAL,
so move this logic into the application and regularly run a checkpoint.
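
An explicit application-driven checkpoint looks like this with SQLite's
wal_checkpoint pragma (shown via Python's sqlite3; the surrounding setup
is just for the example, and in practice this would run from a periodic
timer or thread):

```python
import os
import sqlite3
import tempfile

def truncate_wal(conn):
    """Explicitly checkpoint and truncate the WAL file to zero bytes,
    rather than relying on SQLite's automatic checkpoints.
    Returns a (busy, wal_frames, checkpointed_frames) row."""
    return conn.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()

db_file = os.path.join(tempfile.mkdtemp(), "coordinator.db")
conn = sqlite3.connect(db_file)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE builds (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO builds VALUES (1)")
conn.commit()

# With no other connections reading, busy should be 0 and the -wal file
# is truncated to zero bytes.
busy, wal_frames, checkpointed = truncate_wal(conn)
```

TRUNCATE is the most aggressive checkpoint mode; it blocks until it can
reset the WAL, which is why running it on a schedule from the application
gives more control than SQLite's automatic checkpointing.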
I particularly want to monitor the WAL growth, as I don't think SQLite's usual
approach to keeping the size down is sufficient.
Use a temporary table to avoid computing the priorities for all builds. This
speeds up the allocation to only take a few seconds on the database I'm
testing against.
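
The temporary-table trick, sketched with a guessed schema (Python sqlite3
for illustration): priorities are computed only for the builds that can
still be allocated, and the rest of the allocation query joins against
that small table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE builds (
  id INTEGER PRIMARY KEY,
  priority INTEGER NOT NULL,
  allocated BOOLEAN NOT NULL DEFAULT 0
);
INSERT INTO builds (id, priority, allocated)
VALUES (1, 5, 1), (2, 9, 0), (3, 1, 0);

-- Compute priorities only for builds that can actually be allocated,
-- rather than for every build in the table.
CREATE TEMPORARY TABLE build_priorities AS
SELECT id, priority AS derived_priority
FROM builds
WHERE allocated = 0;
""")

# The allocation plan then joins against the small temporary table.
plan = conn.execute("""
SELECT builds.id
FROM builds
JOIN build_priorities ON builds.id = build_priorities.id
ORDER BY build_priorities.derived_priority DESC
""").fetchall()
```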
Substitutes could be available for all direct inputs, but be missing for
things they reference. This could happen, for example, if those builds
happened on a machine that already had the store items available.
Therefore, search the entire graph for the relevant derivation when looking
for the derivation to build to provide the missing input.
This change matches the similar improvement around handling the fetching of
substitutes.
This can then be used by allocators to avoid allocating builds to agents
that are never going to fetch them.
This isn't particularly accurate; what's actually being stored is the
current time when the record is inserted into the coordinator database, but
that should happen just before the agent starts the build, so hopefully
that's good enough.
This is useful for finding builds that have failed, and in failing have
blocked other builds from being attempted.
Don't always substitute the derivation; just fetch it if it doesn't exist
in the database. Also, just use the name of the derivation, and only read it
from disk when it needs storing in the database.
It worked under some database conditions, but was very slow under others.
Move more of the logic into SQL in an attempt to make the allocator faster.
This sort of works, but the approach replaced by this commit had some
advantages too.
To use with the derivation ordered allocator.
This will be used by the derivation ordered allocator to find builds which can
be performed.
This was resulting in duplicate builds for the same output, as that's not
what it was guarding against, although I think that was my intention...
Anyway, this should now only result in builds being created for outputs
that are required.