Welcome
Awesome Dat
Dat Applications
datproject/dat
datproject/dat-desktop
Community Applications
codeforscience/sciencefair
mafintosh/hyperirc
jondashkyle/soundcloud-archiver
mafintosh/hypervision
joehand/hypertweet
beakerbrowser/dat-photos-app
High-Level APIs
datproject/dat-node
datproject/dat-js
beakerbrowser/pauls-dat-api
beakerbrowser/node-dat-archive
Hosting & Dat Management
mafintosh/hypercore-archiver
datprotocol/hypercloud
beakerbrowser/hashbase
joehand/dat-now
mafintosh/hypercore-archiver-bot
joehand/hypercore-archiver-ws
datproject/dat-registry-api
datproject/dat-registry-client
Managing & Aggregating Dats
datproject/multidat
datproject/multidrive
jayrbolton/dat-pki
beakerbrowser/injestdb
HTTP Hosting
joehand/hyperdrive-http
beakerbrowser/dathttpd
Dat Link Utilities
datprotocol/dat-dns
joehand/dat-link-resolve
pfrazee/parse-dat-url
juliangruber/dat-encoding
Dat Utilities
joehand/dat-log
mafintosh/dat-ls
karissa/hyperhealth
joehand/hyperdrive-network-speed
File Imports & Exports
juliangruber/hyperdrive-import-files
mafintosh/mirror-folder
pfrazee/hyperdrive-staging-area
pfrazee/hyperdrive-to-zip-stream
Hypercore Tools
mafintosh/hyperpipe
Dat Core Modules
mafintosh/hyperdrive
mafintosh/hypercore
CLI Utilities
joehand/dat-doctor
joehand/dat-ignore
joehand/dat-json
Networking
karissa/hyperdiscovery
mafintosh/discovery-swarm
mafintosh/webrtc-swarm
joehand/dat-swarm-defaults
Lower-Level Networking Modules
maxogden/discovery-channel
mafintosh/dns-discovery
mafintosh/multicast-dns
webtorrent/bittorrent-dht
mafintosh/utp-native
mafintosh/signalhub
Storage
datproject/dat-storage
datproject/dat-secret-storage
Random Access
juliangruber/abstract-random-access
mafintosh/multi-random-access
mafintosh/random-access-file
mafintosh/random-access-memory
mafintosh/random-access-page-files
datproject/dat-http
substack/random-access-idb
Other Related Dat Project Modules
mafintosh/peer-network
mafintosh/hyperdht
Dat Project Organization Stuff
datproject/datproject.org
datproject/discussions
datproject/design
datproject/dat-elements
kriesse/dat-colors
kriesse/dat-icons
juliangruber/dat.json
Outdated
juliangruber/dat.haus
poga/hyperfeed
yoshuawuyts/normcore
yoshuawuyts/github-to-hypercore
poga/hyperspark
juliangruber/hypercore-index
juliangruber/hyperdrive-encoding
mafintosh/hyperdrive-http-server
joehand/hyperdrive-http
joehand/dat-push
joehand/dat-backup
joehand/archiver-server
joehand/archiver-api
poga/hyperdrive-ln
substack/hyperdrive-multiwriter
substack/hyperdrive-named-archives
substack/git-dat
CfABrigadePhiladelphia/jawn
maxogden/dat-archiver
juliangruber/hyperdrive-stats
karissa/hypercore-stats-server
mafintosh/hypercore-stats-ui
karissa/zip-to-hyperdrive
joehand/url-dat
joehand/tar-dat
joehand/hyperdrive-duplicate

hypercore-archiver

Easily archive multiple hypercores or hyperdrives

Usage

var archiver = require('hypercore-archiver')
var hypercore = require('hypercore')

var ar = archiver('./my-archiver') // also supports passing in a storage provider
var feed = hypercore('./my-feed')

feed.on('ready', function () {
  // start archiving the feed
  ar.add(feed.key, function (err) {
    if (err) throw err
    console.log('will now archive the feed')
  })
})

ar.on('sync', function (feed) {
  console.log('feed is synced', feed.key)
})

// set up replication between the archiver and the feed
var stream = ar.replicate()
stream.pipe(feed.replicate({live: true})).pipe(stream)

feed.append(['hello', 'world'])

API

var ar = archiver(storage, [key], [options])

Create a new archiver. storage can be a file system path or a storage provider like random-access-memory.

If this archiver is a clone of another archiver, pass that archiver's changes feed key as the second argument.

Options include

{
  sparse: false // set to true to only archive blocks you request
}
Sparse File Storage

The sparse option uses sparse file mode, which is only available on some file systems. The file appears at its full size but only takes up the space actually used on disk.

  • Use ls -alsh to view the actual size on disk (first column)
  • Sparse file mode is not available on macOS (APFS).

ar.add(key, [callback])

Add a new hypercore or hyperdrive key to be archived.

ar.remove(key, [callback])

Remove a key.

ar.list(callback)

List all hypercores and hyperdrives being archived.

ar.get(key, callback)

Retrieve the feed being archived. If the key points to a hyperdrive, the callback is called with (err, metadataFeed, contentFeed).

ar.changes

A changes feed containing the archiver state. Pass the changes feed key to another hypercore-archiver instance to replicate this archiver and all of its feeds.

var stream = ar.replicate([options])

Create a replication stream. By default the archiver only replicates feeds the remote side asks for. To have the archiver actively request a specific feed, pass {key: feedKey} as an option.

ar.on('add', feed)

Emitted when a feed is being added

ar.on('remove', feed)

Emitted when a feed is being removed

ar.on('sync', feed)

Emitted when a feed has been fully synced

ar.on('download', feed, index, data, peer)

Emitted when the archiver downloads a block of data

ar.on('upload', feed, index, data, peer)

Emitted when the archiver uploads a block of data

ar.on('ready')

Emitted when all internal state has been loaded (the changes feed will be set). You do not have to wait for this event before calling any async function.

Network Swarm

The archiver comes with a network swarm as well. This makes the archiver replicate over the internet and the local network. To use it:

var swarm = require('hypercore-archiver/swarm')
swarm(archiver)

License

MIT