ouroboros-consensus-0.1.0.1: Consensus layer for the Ouroboros blockchain protocol
Safe Haskell None
Language Haskell2010

Ouroboros.Consensus.Storage.ChainDB.Impl.Background

Description

Background tasks:

  • Copying blocks from the VolatileDB to the ImmutableDB
  • Performing and scheduling garbage collections on the VolatileDB
  • Writing snapshots of the LedgerDB to disk and deleting old ones
  • Executing scheduled chain selections
Synopsis

Launch background tasks

launchBgTasks Source #

Arguments

:: forall m blk. (IOLike m, LedgerSupportsProtocol blk, InspectLedger blk, HasHardForkHistory blk, LgrDbSerialiseConstraints blk)
=> ChainDbEnv m blk
-> Word64

Number of immutable blocks replayed on ledger DB startup

-> m ()

Copying blocks from the VolatileDB to the ImmutableDB

copyAndSnapshotRunner Source #

Arguments

:: forall m blk. (IOLike m, ConsensusProtocol (BlockProtocol blk), HasHeader blk, GetHeader blk, IsLedger (LedgerState blk), LgrDbSerialiseConstraints blk)
=> ChainDbEnv m blk
-> GcSchedule m
-> Word64

Number of immutable blocks replayed on ledger DB startup

-> m Void

Copy blocks from the VolatileDB to ImmutableDB and take snapshots of the LgrDB

We watch the chain for changes. Whenever the chain is longer than k, the blocks older than k are copied from the VolatileDB to the ImmutableDB (using copyToImmutableDB; a sketch of this loop is given below). Once that is complete,

  • We periodically take a snapshot of the LgrDB (depending on its config). If enough blocks (depending on its config) were replayed during startup, a snapshot of the replayed LgrDB is written to disk at the start of this function. NOTE: after this initial snapshot, we do not take another snapshot of the LgrDB until the chain has changed again, irrespective of the LgrDB policy.
  • Schedule GC of the VolatileDB (scheduleGC) for the SlotNo of the most recent block that was copied.

It is important that we only take LgrDB snapshots when we are sure the corresponding blocks have been copied to the ImmutableDB, since the LgrDB assumes that all snapshots correspond to immutable blocks. (Of course, data corruption can occur, and we can handle it by reverting to an older LgrDB snapshot, but we should need this only in exceptional circumstances.)

We do not store any state of the VolatileDB GC. If the node shuts down before a scheduled GC can happen, then when we restart the node and schedule the next GC, it will subsume any previously scheduled GC, since GC is driven by slot number ("garbage collect anything older than x").
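Putting the pieces together, a purely illustrative, self-contained sketch of one iteration of this loop (all names here, such as Step, oneIteration, and snapEvery, are stand-ins for illustration, not the real ChainDB internals):

data Step = Step
  { copiedBlocks :: Int   -- number of blocks to copy to the ImmutableDB
  , takeSnapshot :: Bool  -- whether to snapshot the LgrDB afterwards
  }

-- Decide what one iteration of the copy-and-snapshot loop should do,
-- given the current chain length and how many blocks were copied since
-- the last snapshot.
oneIteration ::
     Int  -- security parameter k
  -> Int  -- snapshot once this many blocks have been copied
          -- (a stand-in for the real LgrDB snapshot policy)
  -> Int  -- current length of the chain fragment
  -> Int  -- blocks copied since the last snapshot
  -> Maybe Step
oneIteration k snapEvery chainLen sinceSnap
  | chainLen <= k = Nothing   -- nothing older than k: wait for the chain to change
  | otherwise     = Just Step
      { copiedBlocks = copied
      , takeSnapshot = sinceSnap + copied >= snapEvery
      }
  where
    copied = chainLen - k

After a Just step, the caller would run copyToImmutableDB, optionally updateLedgerSnapshots, and finally scheduleGC for the slot of the most recent copied block.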

copyToImmutableDB :: forall m blk. (IOLike m, ConsensusProtocol (BlockProtocol blk), HasHeader blk, GetHeader blk, HasCallStack) => ChainDbEnv m blk -> m (WithOrigin SlotNo) Source #

Copy the blocks older than k from the VolatileDB to the ImmutableDB.

The headers of these blocks can be retrieved by dropping the k most recent blocks from the fragment stored in cdbChain.

The copied blocks are removed from the fragment stored in cdbChain.

This function does not remove blocks from the VolatileDB.

The SlotNo of the tip of the ImmutableDB after copying the blocks is returned. This can be used for a garbage collection on the VolatileDB.

NOTE: this function would not be safe when called multiple times concurrently. To enforce thread-safety, a lock is obtained at the start of this function and released at the end. So in practice, this function can be called multiple times concurrently, but the calls will be serialised.

NOTE: this function can run concurrently with all other functions, just not with itself.
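The serialisation pattern described in these notes can be sketched with a plain MVar as the lock (a hedged model; the real code keeps its lock inside the ChainDB environment and works in any IOLike monad, not just IO):

import Control.Concurrent.MVar (MVar, newMVar, withMVar)

newtype CopyLock = CopyLock (MVar ())

newCopyLock :: IO CopyLock
newCopyLock = CopyLock <$> newMVar ()

-- Concurrent callers are serialised: a second caller blocks until the
-- first has released the lock, so the body never runs concurrently with
-- itself, while unrelated functions are unaffected.
withCopyLock :: CopyLock -> IO a -> IO a
withCopyLock (CopyLock lock) body = withMVar lock (const body)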

updateLedgerSnapshots :: (IOLike m, LgrDbSerialiseConstraints blk, HasHeader blk, IsLedger (LedgerState blk)) => ChainDbEnv m blk -> m () Source #

Write a snapshot of the LedgerDB to disk and remove old snapshots (typically one) so that only onDiskNumSnapshots snapshots are on disk.
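The retention policy can be modelled as a pure function over snapshot names ordered newest first (a sketch; onDiskNumSnapshots is the only real name here, trimSnapshots is hypothetical):

-- Keep the onDiskNumSnapshots most recent snapshots, delete the rest.
trimSnapshots ::
     Int                          -- onDiskNumSnapshots
  -> [FilePath]                   -- snapshots on disk, newest first
  -> ([FilePath], [FilePath])     -- (kept, to delete)
trimSnapshots = splitAt

-- For example:
--   trimSnapshots 2 ["snapshot-300", "snapshot-200", "snapshot-100"]
--     == (["snapshot-300", "snapshot-200"], ["snapshot-100"])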

Executing garbage collection

garbageCollect :: forall m blk. IOLike m => ChainDbEnv m blk -> SlotNo -> m () Source #

Trigger a garbage collection for blocks older than the given SlotNo on the VolatileDB.

Also removes the corresponding cached "previously applied points" from the LedgerDB.

This is thread-safe as the VolatileDB locks itself while performing a GC.

When calling this function it is critical that the blocks that will be garbage collected, as determined by the slotNo parameter, have already been copied to the ImmutableDB (if they are part of the current selection).

TODO: will a long GC be a bottleneck? It will block any other calls to putBlock and getBlock.
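The effect on the VolatileDB contents can be modelled as a pure function over a map from slots to blocks (a sketch; gcModel is hypothetical and ignores the cached ledger points mentioned above):

import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

type Slot = Word  -- stand-in for SlotNo

-- Keep only blocks whose slot is at least the cutoff; everything older
-- is garbage collected. The real VolatileDB does this under its own
-- lock, which is what makes garbageCollect thread-safe.
gcModel :: Slot -> Map Slot blk -> Map Slot blk
gcModel cutoff = Map.filterWithKey (\slot _ -> slot >= cutoff)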

Scheduling garbage collections

data GcParams Source #

Constructors

GcParams

Fields

gcDelay :: DiffTime
How long to wait before performing a GC of a block that has been copied to the ImmutableDB

gcInterval :: DiffTime
The interval to which scheduled GCs are batched: at most one GC is performed per gcInterval
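For instance, the parameters used in the worked example in the GcSchedule documentation below could be constructed as follows (a sketch, assuming the DiffTime fields listed above; DiffTime numeric literals denote seconds):

import Data.Time.Clock (DiffTime)
import Ouroboros.Consensus.Storage.ChainDB.Impl.Background (GcParams (..))

exampleGcParams :: GcParams
exampleGcParams = GcParams
  { gcDelay    = 5 * 60  -- 5 minutes
  , gcInterval = 10      -- 10 seconds
  }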

data GcSchedule m Source #

Scheduled garbage collections

When a block has been copied to the ImmutableDB, we schedule a VolatileDB garbage collection for that block's slot at some point in the future. How far in the future is determined by the gcDelay parameter. The goal is to allow some overlap so that the write to the ImmutableDB will have been flushed to disk before the block is removed from the VolatileDB.

We store scheduled garbage collections in a FIFO queue. Since the queue will be very short (see further down for why) and entries are added more often (at block sync speed, by a single thread) than removed (once every gcInterval), we simply use a StrictSeq stored in a TVar to make reasoning and testing easier. Entries are enqueued at the end (right) and dequeued from the head (left).

The Times in the queue will be monotonically increasing. A fictional example (with hh:mm:ss):

[(16:01:12, SlotNo 1012), (16:04:38, SlotNo 1045), ..]

Scheduling a garbage collection with scheduleGC will add an entry to the end of the queue for the given slot, at the time equal to now (getMonotonicTime) + gcDelay, rounded up to gcInterval, unless the last entry in the queue was scheduled for the same rounded time, in which case the new entry replaces it. The goal is to batch garbage collections so that, when possible, at most one garbage collection happens every gcInterval.

For example, starting with an empty queue, gcDelay = 5min, and gcInterval = 10s:

At 8:43:22, we schedule a GC for slot 10:

[(8:48:30, SlotNo 10)]

The scheduled time is rounded up to the next interval. Next, at 8:43:24, we schedule a GC for slot 11:

[(8:48:30, SlotNo 11)]

Note that the existing entry is replaced with the new one, as both map to the same gcInterval. Instead of two GCs 2 seconds apart, we schedule only one.

Next, at 8:44:02, we schedule a GC for slot 12:

[(8:48:30, SlotNo 11), (8:49:10, SlotNo 12)]

This time a new entry was appended to the queue, as it doesn't map to the same gcInterval as the last one.

In other words, everything scheduled in the first 10s will be done after 20s. The bounds are the open-closed interval:

(now + gcDelay, now + gcDelay + gcInterval]

Whether we're syncing at high speed or downloading blocks as they are produced, the length of the queue will be at most ⌈gcDelay / gcInterval⌉ + 1 entries, e.g., for gcDelay = 5min and gcInterval = 10s, at most 30 + 1 = 31 entries. The + 1 is needed because we might be somewhere in the middle of a gcInterval.

The background thread will look at the head of the queue and wait until that Time has passed. After the wait, it will pop the head of the queue and perform a garbage collection for the SlotNo in it. Note that the SlotNo before the wait can differ from the one after the wait, precisely because of batching.
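To make the rounding and batching concrete, here is a small self-contained model of the queue logic (a sketch, not the real implementation: it uses Integer seconds in place of Time, Word in place of SlotNo, and a pure Seq instead of a StrictSeq in a TVar; timeForGC and scheduleModel are hypothetical names):

import Data.Sequence (Seq ((:|>)), (|>))
import qualified Data.Sequence as Seq

type Seconds = Integer  -- stand-in for Time
type Slot    = Word     -- stand-in for SlotNo

-- Model of computeTimeForGC: the next gcInterval boundary strictly after
-- now + gcDelay, matching the open-closed bounds
-- (now + gcDelay, now + gcDelay + gcInterval] given above.
timeForGC :: Seconds -> Seconds -> Seconds -> Seconds
timeForGC gcDelay gcInterval now =
    ((now + gcDelay) `div` gcInterval + 1) * gcInterval

-- Model of scheduleGC: append a new entry, unless the last entry is
-- already scheduled for the same rounded time, in which case it is
-- replaced (the batching described above).
scheduleModel ::
     Seconds -> Seconds        -- gcDelay, gcInterval
  -> Seconds -> Slot           -- now, slot to garbage collect
  -> Seq (Seconds, Slot)
  -> Seq (Seconds, Slot)
scheduleModel gcDelay gcInterval now slot queue =
    case queue of
      rest :|> (t, _) | t == gcTime -> rest  |> (gcTime, slot)
      _                             -> queue |> (gcTime, slot)
  where
    gcTime = timeForGC gcDelay gcInterval now

Replaying the example above, with seconds counted from 8:43:00 (so 8:43:22 is 22):

-- ghci> foldl (\q (now, s) -> scheduleModel 300 10 now s q) Seq.empty
--              [(22, 10), (24, 11), (62, 12)]
-- fromList [(330,11),(370,12)]
-- i.e. [(8:48:30, SlotNo 11), (8:49:10, SlotNo 12)], as above.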

computeTimeForGC Source #

Arguments

:: GcParams
-> Time

Now

-> Time

The time at which to perform the GC

gcScheduleRunner Source #

Arguments

:: forall m. IOLike m
=> GcSchedule m
-> (SlotNo -> m ())

GC function

-> m Void
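A hedged sketch of this runner over the same toy queue model as above (IORef plus polling instead of the real STM-based blocking; Seconds and Slot as in the previous sketch, runnerModel is hypothetical):

import Control.Concurrent (threadDelay)
import Control.Monad (forever)
import Data.IORef (IORef, atomicModifyIORef', readIORef)
import Data.Sequence (Seq (Empty, (:<|)))
import qualified Data.Sequence as Seq

type Seconds = Integer  -- stand-in for Time
type Slot    = Word     -- stand-in for SlotNo

-- Wait until the head entry's time has passed, then pop it and run the
-- GC function for its slot. Because the queue is re-read after every
-- wait, the slot we eventually collect may differ from the one we first
-- saw: a producer may have replaced the entry in the meantime (batching).
runnerModel ::
     IO Seconds                   -- read the current time
  -> IORef (Seq (Seconds, Slot))  -- the schedule
  -> (Slot -> IO ())              -- GC function
  -> IO ()
runnerModel getNow queueRef runGC = forever $ do
    queue <- readIORef queueRef
    case queue of
      Empty -> threadDelay 100000  -- nothing scheduled yet; poll again later
      (t, slot) :<| _ -> do
        now <- getNow
        if now < t
          then threadDelay (fromIntegral (t - now) * 1000000)
          else do
            -- We are the only consumer, so the head is still the entry
            -- we just inspected: drop it and collect.
            atomicModifyIORef' queueRef (\q -> (Seq.drop 1 q, ()))
            runGC slot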

scheduleGC Source #

Arguments

:: forall m blk. IOLike m
=> Tracer m (TraceGCEvent blk)
-> SlotNo

The slot to use for garbage collection

-> GcParams
-> GcSchedule m
-> m ()

Testing

data ScheduledGc Source #

Constructors

ScheduledGc

Fields

scheduledGcTime :: Time
Time at which to run the garbage collection

scheduledGcSlot :: SlotNo
The slot to run the garbage collection for

Instances

Eq ScheduledGc Source #
Defined in Ouroboros.Consensus.Storage.ChainDB.Impl.Background

Show ScheduledGc Source #
Defined in Ouroboros.Consensus.Storage.ChainDB.Impl.Background

Generic ScheduledGc Source #
Defined in Ouroboros.Consensus.Storage.ChainDB.Impl.Background

NoThunks ScheduledGc Source #
Defined in Ouroboros.Consensus.Storage.ChainDB.Impl.Background

Condense ScheduledGc Source #
Defined in Ouroboros.Consensus.Storage.ChainDB.Impl.Background

type Rep ScheduledGc Source #
Defined in Ouroboros.Consensus.Storage.ChainDB.Impl.Background

type Rep ScheduledGc = D1 ('MetaData "ScheduledGc" "Ouroboros.Consensus.Storage.ChainDB.Impl.Background" "ouroboros-consensus-0.1.0.1-DT4Cvwf63DZKctsEvaJqCU" 'False) (C1 ('MetaCons "ScheduledGc" 'PrefixI 'True) (S1 ('MetaSel ('Just "scheduledGcTime") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 Time) :*: S1 ('MetaSel ('Just "scheduledGcSlot") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 SlotNo)))

dumpGcSchedule :: IOLike m => GcSchedule m -> STM m [ScheduledGc] Source #

Return the current contents of the GcSchedule queue without modifying it.

For testing purposes.
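A minimal usage sketch for tests (assuming the IOLike combinators live in Ouroboros.Consensus.Util.IOLike, which re-exports atomically; currentSchedule is hypothetical):

import Ouroboros.Consensus.Storage.ChainDB.Impl.Background
         (GcSchedule, ScheduledGc, dumpGcSchedule)
import Ouroboros.Consensus.Util.IOLike (IOLike, atomically)

-- Read the queue contents outside of STM, e.g. for a test assertion.
currentSchedule :: IOLike m => GcSchedule m -> m [ScheduledGc]
currentSchedule = atomically . dumpGcSchedule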

Adding blocks to the ChainDB