Where do I see the current DAG size?

  • Where do I see the current, exact DAG size? Is it being tracked on a website somewhere, or is it something I can check on my miner?

  • The DAG size is actually calculated by a fixed formula. You can find more details here. The function that does the calculation is:

    # Constants from the Ethash spec (isprime can come from e.g. sympy):
    DATASET_BYTES_INIT = 2**30    # dataset size at genesis
    DATASET_BYTES_GROWTH = 2**23  # growth per epoch
    EPOCH_LENGTH = 30000          # blocks per epoch
    MIX_BYTES = 128               # width of mix
    def get_full_size(block_number):
        sz = DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * (block_number // EPOCH_LENGTH)
        sz -= MIX_BYTES
        while not isprime(sz // MIX_BYTES):  # // not /: avoids a float in Python 3
            sz -= 2 * MIX_BYTES
        return sz
    

    To make it easier, take the current block number (e.g. 121,000), divide it by 30,000 (which gives 4 here), and look up the entry at index 4 in the data_sizes array at the end of the linked page. In our example, it is 1107293056 bytes.

    This is the DAG size. If, however, you meant the size of the current DAG file, it has an extra 8 bytes: a magic number at the beginning of the file, documented here. So the DAG file is 1107293056 + 8 = 1107293064 bytes.
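    Putting the formula and the lookup together, here is a self-contained sketch. It replaces isprime with plain trial division so nothing beyond the standard library is needed; everything else follows the spec function quoted above:

```python
# Published Ethash parameters:
DATASET_BYTES_INIT = 2**30    # dataset size at genesis
DATASET_BYTES_GROWTH = 2**23  # growth per epoch
EPOCH_LENGTH = 30000          # blocks per epoch
MIX_BYTES = 128               # width of mix

def isprime(n):
    # Trial division; fast enough for the ~8.6-million range used here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def get_full_size(block_number):
    sz = DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * (block_number // EPOCH_LENGTH)
    sz -= MIX_BYTES
    while not isprime(sz // MIX_BYTES):
        sz -= 2 * MIX_BYTES
    return sz

dag_size = get_full_size(121000)   # block 121,000 is in epoch 4
dag_file_size = dag_size + 8       # plus the 8-byte magic number
print(dag_size, dag_file_size)     # 1107293056 and 1107293064, per the text above
```

    The result matches the data_sizes[4] entry, so the table lookup and the formula agree.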

    @Richard's answer is actually the DAG file size.

  • If you need a DAG size tracker, you can visit investoon.com/tools/dag_size, where you can see the current DAG size and the important epochs.

  • You can check it yourself on your miner:

    $ ls -l ~/.ethash/
    total 1048584
    -rw-rw-r-- 1 richard richard 1073739912 Mar 10 20:36 full-R23-0000000000000000
    ......
    

    (If you're running a standalone miner, then the file is generated locally, not downloaded from somewhere.)

    Thanks! I've been running the miner for a while and noticed that the .ethash directory keeps filling up with other full-R23-* files. If the DAG is the all-zeroes file, what are the others? I see that every 4-5 days or so a new full-R23-* file gets generated, 1.9 GB in size.

    They're all DAG files, though only the one with the most recent timestamp is used. The DAG changes every epoch, which is currently 30,000 blocks (~100 hours). You can safely remove the older ones.
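    The "~100 hours" figure is just block-time arithmetic; a quick sketch (the ~12 s average block time is an assumption and drifts with network difficulty):

```python
EPOCH_LENGTH = 30000       # blocks per epoch
AVG_BLOCK_TIME_S = 12      # assumed average block time, in seconds
epoch_hours = EPOCH_LENGTH * AVG_BLOCK_TIME_S / 3600
epoch_days = epoch_hours / 24
print(epoch_hours, epoch_days)  # 100.0 hours, ~4.2 days
```

    The ~4.2 days also matches the "every 4-5 days" observation above.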

    Richard, thanks a bunch for the explanation. Sounds like I need to adjust my cron job to remove all but the latest file for maintenance.

    No problem - glad to be of help.

    @RichardHorrocks, the file with the most recent timestamp is usually the one in use, but not always. There are cases where ethash generates a new DAG file ahead of time for a smooth transition, even though it isn't going to be used right away.

    Ah, the pre-generation - you're right. I'd forgotten about that :-) @JCor1 - as vutran says, you'll need to keep the 2 DAGs with the most recent timestamps, just to be safe. (If you've already removed all but the most recent one, and that's actually a future DAG, then the current DAG should be regenerated when you restart Geth.)
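    If you do automate the cleanup, here is a minimal cron-friendly sketch. It assumes the default ~/.ethash path and the R23 revision shown above, and GNU-style `ls -t` / `xargs -r`; it keeps the two most recently modified DAG files (current plus any pre-generated one) and removes the rest:

```shell
# Keep the two newest DAG files; DAGs from older epochs are no longer needed.
ls -t ~/.ethash/full-R23-* 2>/dev/null | tail -n +3 | xargs -r rm --
```

    `ls -t` sorts newest first, `tail -n +3` skips the first two entries, and `xargs -r` does nothing when there is nothing to delete.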

    @RichardHorrocks, from the source code of ethash, I believe you don't have to restart Geth to have it regenerated :D

Licensed under CC-BY-SA with attribution


Content dated before 7/24/2021 11:53 AM