# pauls-dat-api
A library of functions that make working with dat / hyperdrive easier. Includes common operations, and some sugars. These functions were factored out of beaker browser's internal APIs.
All async methods work with callbacks and promises. If no callback is provided, a promise will be returned.
Any time a hyperdrive archive is expected, a hyperdrive-staging-area instance can be provided, unless otherwise stated.
```js
var hyperdrive = require('hyperdrive')
var hyperstaging = require('hyperdrive-staging-area')
var archive = hyperdrive('./my-first-hyperdrive-meta') // metadata will be stored in this folder
var staging = hyperstaging(archive, './my-first-hyperdrive') // content will be stored in this folder

await pda.readFile(archive, '/hello.txt') // read the committed hello.txt
await pda.readFile(staging, '/hello.txt') // read the local hello.txt
```
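The examples in this README use the promise form. For reference, a minimal sketch of the callback form (the same `readFile` call as above, assuming the usual node-style `(err, result)` callback signature):

```js
// callback style: pass a callback as the last argument and no promise is returned
pda.readFile(staging, '/hello.txt', (err, contents) => {
  if (err) throw err
  console.log(contents)
})
```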
NOTE: this library is written natively for node 7 and above. To use it with node versions older than 7:

```js
var pda = require('pauls-dat-api/es5');
```

```js
const pda = require('pauls-dat-api')
```
## Staging
### diff(staging[, opts, cb])

List the differences between staging and its archive.

- `staging` HyperdriveStagingArea instance (object).
- `opts.skipIgnore` Don't use staging's .datignore (bool).
- Returns an array of changes.

```js
await pda.diff(staging)
```
Output looks like:
```
[
  {
    change: 'add' | 'mod' | 'del'
    type: 'dir' | 'file'
    path: String (path of the file)
  },
  ...
]
```
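For example, a sketch of the diff-then-commit workflow (using the `staging` instance from the setup above):

```js
var changes = await pda.diff(staging)
changes.forEach(c => console.log(c.change, c.type, c.path))
if (changes.length > 0) {
  await pda.commit(staging) // apply the listed changes to the archive
}
```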
### commit(staging[, opts, cb])

Apply the changes in staging to its archive.

- `staging` HyperdriveStagingArea instance (object).
- `opts.skipIgnore` Don't use staging's .datignore (bool).
- Returns an array of the changes applied.

```js
await pda.commit(staging)
```
Output looks like:
```
[
  {
    change: 'add' | 'mod' | 'del'
    type: 'dir' | 'file'
    path: String (path of the file)
  },
  ...
]
```
### revert(staging[, opts, cb])

Revert the changes in staging to reflect the state of its archive.

- `staging` HyperdriveStagingArea instance (object).
- `opts.skipIgnore` Don't use staging's .datignore (bool).
- Returns an array of the changes that were reverted.

```js
await pda.revert(staging)
```
Output looks like:
```
[
  {
    change: 'add' | 'mod' | 'del'
    type: 'dir' | 'file'
    path: String (path of the file)
  },
  ...
]
```
## Lookup
### stat(archive, name[, cb])

- `archive` Hyperdrive archive (object).
- `name` Entry name (string).
- Returns a Hyperdrive Stat entry (object).
- Throws NotFoundError.
```js
// by name:
var st = await pda.stat(archive, '/dat.json')
st.isDirectory()
st.isFile()
console.log(st) /* =>
Stat {
  dev: 0,
  nlink: 1,
  rdev: 0,
  blksize: 0,
  ino: 0,
  mode: 16877,
  uid: 0,
  gid: 0,
  size: 0,
  offset: 0,
  blocks: 0,
  atime: 2017-04-10T18:59:00.147Z,
  mtime: 2017-04-10T18:59:00.147Z,
  ctime: 2017-04-10T18:59:00.147Z,
  linkname: undefined } */
```
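Since the returned Stat exposes `isDirectory()` and `isFile()`, a small sketch (the `/assets` path is illustrative) can branch on the entry type before reading:

```js
var st = await pda.stat(archive, '/assets')
if (st.isDirectory()) {
  console.log(await pda.readdir(archive, '/assets')) // list the folder
} else if (st.isFile()) {
  console.log(await pda.readFile(archive, '/assets')) // read the file as utf8
}
```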
## Read
### readFile(archive, name[, opts, cb])

- `archive` Hyperdrive archive (object).
- `name` Entry path (string).
- `opts` Options (object|string). If a string, will act as `opts.encoding`.
- `opts.encoding` Desired output encoding (string). May be 'binary', 'utf8', 'hex', or 'base64'. Default 'utf8'.
- Returns the content of the file in the requested encoding.
- Throws NotFoundError, NotAFileError.
```js
var manifestStr = await pda.readFile(archive, '/dat.json')
var imageBase64 = await pda.readFile(archive, '/favicon.png', 'base64')
```
### readdir(archive, path[, opts, cb])

- `archive` Hyperdrive archive (object).
- `path` Target directory path (string).
- `opts.recursive` Read all subfolders and their files as well?
- Returns an array of file and folder names.
```js
var listing = await pda.readdir(archive, '/assets')
console.log(listing) // => ['profile.png', 'styles.css']

var listing = await pda.readdir(archive, '/', { recursive: true })
console.log(listing) /* => [
  'index.html',
  'assets',
  'assets/profile.png',
  'assets/styles.css'
]*/
```
## Write
### writeFile(archive, name, data[, opts, cb])

- `archive` Hyperdrive archive (object).
- `name` Entry path (string).
- `data` Data to write (string|Buffer).
- `opts` Options (object|string). If a string, will act as `opts.encoding`.
- `opts.encoding` Desired file encoding (string). May be 'binary', 'utf8', 'hex', or 'base64'. Default 'utf8' if `data` is a string, 'binary' if `data` is a Buffer.
- Throws ArchiveNotWritableError, InvalidPathError, EntryAlreadyExistsError, ParentFolderDoesntExistError, InvalidEncodingError.
```js
await pda.writeFile(archive, '/hello.txt', 'world', 'utf8')
await pda.writeFile(archive, '/profile.png', fs.readFileSync('/tmp/dog.png'))
```
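Because `data` can be any string, structured data just needs to be serialized before the call; a small sketch (the path and object are illustrative):

```js
// write a JSON file by serializing first
await pda.writeFile(archive, '/config.json', JSON.stringify({theme: 'dark'}, null, 2))
```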
### mkdir(archive, name[, cb])

- `archive` Hyperdrive archive (object).
- `name` Directory path (string).
- Throws ArchiveNotWritableError, InvalidPathError, EntryAlreadyExistsError, ParentFolderDoesntExistError, InvalidEncodingError.
```js
await pda.mkdir(archive, '/stuff')
```
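Parent folders are not created implicitly (note ParentFolderDoesntExistError in the list above), so a sketch for nested paths creates each level in turn:

```js
// create one level at a time; '/stuff' must exist before '/stuff/images'
await pda.mkdir(archive, '/stuff')
await pda.mkdir(archive, '/stuff/images')
```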
### copy(archive, sourceName, targetName[, cb])

- `archive` Hyperdrive archive (object).
- `sourceName` Path to the file or directory to copy (string).
- `targetName` Where to copy the file or folder to (string).
- Throws ArchiveNotWritableError, InvalidPathError, EntryAlreadyExistsError, ParentFolderDoesntExistError, InvalidEncodingError.
```js
// copy file:
await pda.copy(archive, '/foo.txt', '/foo.txt.back')

// copy folder:
await pda.copy(archive, '/stuff', '/stuff-copy')
```
### rename(archive, sourceName, targetName[, cb])

- `archive` Hyperdrive archive (object).
- `sourceName` Path to the file or directory to rename (string).
- `targetName` What the file or folder should be named (string).
- Throws ArchiveNotWritableError, InvalidPathError, EntryAlreadyExistsError, ParentFolderDoesntExistError, InvalidEncodingError.

This is equivalent to moving a file/folder.
```js
// move file:
await pda.rename(archive, '/foo.txt', '/foo.md')

// move folder:
await pda.rename(archive, '/stuff', '/things')
```
## Delete
### unlink(archive, name[, cb])

- `archive` Hyperdrive archive (object).
- `name` Entry path (string).
- Throws ArchiveNotWritableError, NotFoundError, NotAFileError.
```js
await pda.unlink(archive, '/hello.txt')
```
### rmdir(archive, name[, opts, cb])

- `archive` Hyperdrive archive (object).
- `name` Entry path (string).
- `opts.recursive` Delete all subfolders and files if the directory is not empty.
- Throws ArchiveNotWritableError, NotFoundError, NotAFolderError, DestDirectoryNotEmpty.
```js
await pda.rmdir(archive, '/stuff', {recursive: true})
```
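To delete a path without knowing up front whether it is a file or a folder, a hypothetical helper (not part of pauls-dat-api) can combine `stat` with `unlink`/`rmdir`:

```js
// hypothetical helper: remove a file or folder at the given path
async function remove (archive, path) {
  var st = await pda.stat(archive, path)
  if (st.isDirectory()) {
    await pda.rmdir(archive, path, {recursive: true})
  } else {
    await pda.unlink(archive, path)
  }
}
```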
## Network
### download(archive, name[, cb])

- `archive` Hyperdrive archive (object). Can not be a staging object.
- `name` Entry path (string). Can point to a file or folder.

Download an archive file or folder-tree.
```js
// download a specific file:
await pda.download(archive, '/foo.txt')

// download a specific folder and all children:
await pda.download(archive, '/bar/')

// download the entire archive:
await pda.download(archive, '/')
```
## Activity Streams
### createFileActivityStream(archive[, staging, path])

- `archive` Hyperdrive archive (object).
- `staging` HyperdriveStagingArea instance (object).
- `path` Entry path (string) or anymatch pattern (array of strings). If falsy, will watch all files.
- Returns a Readable stream.

Watches the given path or path-pattern for file events, which it emits as an emit-stream. Supported events:

- `['changed', {path}]` - The contents of the file have changed, either by a local write or a remote write. The new content will be ready when this event is emitted. `path` is the path-string of the file.
- `['invalidated', {path}]` - The contents of the file have changed remotely, but haven't been downloaded yet. `path` is the path-string of the file.

An archive will emit "invalidated" first, when it receives the new metadata for the file. It will then emit "changed" when the content arrives. (A local archive will not emit "invalidated".)
```js
var es = pda.createFileActivityStream(archive)
var es = pda.createFileActivityStream(archive, 'foo.txt')
var es = pda.createFileActivityStream(archive, ['**/*.txt', '**/*.md'])

es.on('data', ([event, args]) => {
  if (event === 'invalidated') {
    console.log(args.path, 'has been invalidated')
    pda.download(archive, args.path)
  } else if (event === 'changed') {
    console.log(args.path, 'has changed')
  }
})

// alternatively, via emit-stream:
var emitStream = require('emit-stream')
var events = emitStream(es)
events.on('invalidated', args => {
  console.log(args.path, 'has been invalidated')
  pda.download(archive, args.path)
})
events.on('changed', args => {
  console.log(args.path, 'has changed')
})
```
### createNetworkActivityStream(archive)

- `archive` Hyperdrive archive (object). Can not be a staging object.
- Returns a Readable stream.

Watches the archive for network events, which it emits as an emit-stream. Supported events:

- `['network-changed', {connections}]` - The number of connections has changed. `connections` is a number.
- `['download', {feed, block, bytes}]` - A block has been downloaded. `feed` will either be "metadata" or "content". `block` is the index of the data downloaded. `bytes` is the number of bytes in the block.
- `['upload', {feed, block, bytes}]` - A block has been uploaded. `feed` will either be "metadata" or "content". `block` is the index of the data uploaded. `bytes` is the number of bytes in the block.
- `['sync', {feed}]` - All known blocks have been downloaded. `feed` will either be "metadata" or "content".
```js
var es = pda.createNetworkActivityStream(archive)

es.on('data', ([event, args]) => {
  if (event === 'network-changed') {
    console.log('Connected to %d peers', args.connections)
  } else if (event === 'download') {
    console.log('Just downloaded %d bytes (block %d) of the %s feed', args.bytes, args.block, args.feed)
  } else if (event === 'upload') {
    console.log('Just uploaded %d bytes (block %d) of the %s feed', args.bytes, args.block, args.feed)
  } else if (event === 'sync') {
    console.log('Finished downloading', args.feed)
  }
})

// alternatively, via emit-stream:
var emitStream = require('emit-stream')
var events = emitStream(es)
events.on('network-changed', args => {
  console.log('Connected to %d peers', args.connections)
})
events.on('download', args => {
  console.log('Just downloaded %d bytes (block %d) of the %s feed', args.bytes, args.block, args.feed)
})
events.on('upload', args => {
  console.log('Just uploaded %d bytes (block %d) of the %s feed', args.bytes, args.block, args.feed)
})
events.on('sync', args => {
  console.log('Finished downloading', args.feed)
})
```
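As one way to use the 'sync' event, here is a sketch of a hypothetical helper (not part of this library) that resolves once the content feed reports sync:

```js
var emitStream = require('emit-stream')

// hypothetical helper: resolve once all known content blocks are downloaded
function waitForContentSync (archive) {
  return new Promise(resolve => {
    var events = emitStream(pda.createNetworkActivityStream(archive))
    events.on('sync', ({feed}) => {
      if (feed === 'content') resolve()
    })
  })
}
```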
## Exporters
### exportFilesystemToArchive(opts[, cb])

- `opts.srcPath` Source path in the filesystem (string). Required.
- `opts.dstArchive` Destination archive (object). Required.
- `opts.dstPath` Destination path within the archive (string). Optional, defaults to '/'.
- `opts.ignore` Files not to copy (array of strings). Optional. Uses anymatch.
- `opts.inplaceImport` Should import the source directory in-place? (boolean). If true and importing a directory, this will cause the directory's contents to be copied directly into the `dstPath`. If false, will cause the source directory to become a child of the `dstPath`.
- `opts.dryRun` Don't actually copy (boolean). If true, will run all export logic without actually modifying the target archive.
- Returns stats on the export.

Copies a file-tree into an archive.

The dryRun opt is useful because this method compares the source files to the destination before copying, so the stats returned by a dry run give you a file-level diff; see the sketch after the example below.
```js
var stats = await pda.exportFilesystemToArchive({
  srcPath: '/tmp/mystuff',
  dstArchive: archive,
  inplaceImport: true
})
console.log(stats) /* => {
  addedFiles: ['fuzz.txt', 'foo/bar.txt'],
  updatedFiles: ['something.txt'],
  skipCount: 3, // files skipped due to the target already existing
  fileCount: 3,
  totalSize: 400 // bytes
}*/
```
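And a sketch of the dry-run usage mentioned above (the stats have the same shape, but nothing is written to the archive):

```js
var preview = await pda.exportFilesystemToArchive({
  srcPath: '/tmp/mystuff',
  dstArchive: archive,
  inplaceImport: true,
  dryRun: true // report what would be copied without modifying the archive
})
console.log(preview.addedFiles, preview.updatedFiles) // a file-level diff
```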
### exportArchiveToFilesystem(opts[, cb])

- `opts.srcArchive` Source archive (object). Required.
- `opts.dstPath` Destination path in the filesystem (string). Required.
- `opts.srcPath` Source path within the archive (string). Optional, defaults to '/'.
- `opts.ignore` Files not to copy (array of strings). Optional. Uses anymatch.
- `opts.overwriteExisting` Proceed if the destination isn't empty (boolean). Default false.
- `opts.skipUndownloadedFiles` Ignore files that haven't been downloaded yet (boolean). Default false. If false, will wait for source files to download.
- Returns stats on the export.

Copies an archive into the filesystem.

NOTE: Unlike exportFilesystemToArchive, this will not compare the target for equality before copying. If `overwriteExisting` is true, it will simply copy all files again.
```js
var stats = await pda.exportArchiveToFilesystem({
  srcArchive: archive,
  dstPath: '/tmp/mystuff',
  skipUndownloadedFiles: true
})
console.log(stats) /* => {
  addedFiles: ['fuzz.txt', 'foo/bar.txt'],
  updatedFiles: ['something.txt'],
  fileCount: 3,
  totalSize: 400 // bytes
}*/
```
### exportArchiveToArchive(opts[, cb])

- `opts.srcArchive` Source archive (object). Required.
- `opts.dstArchive` Destination archive (object). Required.
- `opts.srcPath` Source path within the source archive (string). Optional, defaults to '/'.
- `opts.dstPath` Destination path within the destination archive (string). Optional, defaults to '/'.
- `opts.ignore` Files not to copy (array of strings). Optional. Uses anymatch.
- `opts.skipUndownloadedFiles` Ignore files that haven't been downloaded yet (boolean). Default false. If false, will wait for source files to download.

Copies an archive into another archive.

NOTE: Unlike exportFilesystemToArchive, this will not compare the target for equality before copying. It copies files indiscriminately.
```js
var stats = await pda.exportArchiveToArchive({
  srcArchive: archiveA,
  dstArchive: archiveB,
  skipUndownloadedFiles: true
})
console.log(stats) /* => {
  addedFiles: ['fuzz.txt', 'foo/bar.txt'],
  updatedFiles: ['something.txt'],
  fileCount: 3,
  totalSize: 400 // bytes
}*/
```
## Manifest
### readManifest(archive[, cb])

- `archive` Hyperdrive archive (object).

A sugar to get the manifest object.
```js
var manifestObj = await pda.readManifest(archive)
```
### writeManifest(archive, manifest[, cb])

- `archive` Hyperdrive archive (object).
- `manifest` Manifest values (object).

A sugar to write the manifest object.
```js
await pda.writeManifest(archive, { title: 'My dat!' })
```
### updateManifest(archive, manifest[, cb])

- `archive` Hyperdrive archive (object).
- `manifest` Manifest values (object).

A sugar to modify the manifest object.
```js
await pda.updateManifest(archive, { title: 'My dat!', description: 'the desc' })
await pda.updateManifest(archive, { title: 'My new title!' }) // preserves description
```
### generateManifest(opts)

- `opts` Manifest options (object).

Helper to generate a manifest object. Opts in detail:
```
{
  url: String, the dat's url
  title: String
  description: String
  author: String
  version: String
  forkOf: String, the forked-from dat's url
  createdBy: String, the url of the app that created the dat
}
```
See: https://github.com/datprotocol/dat.json
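As a sketch of how the manifest helpers fit together (generateManifest takes no callback, so it is assumed here to return the object synchronously), you could generate a manifest and then write it:

```js
// assumed synchronous return; only documented manifest fields are used
var manifest = pda.generateManifest({
  title: 'My dat!',
  description: 'the desc'
})
await pda.writeManifest(archive, manifest)
```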
## Helpers
### findEntryByContentBlock(archive, block)

- `archive` Hyperdrive archive (object).
- `block` Content-block index.
- Returns a Promise for `{name:, start:, end:}`.

Runs a binary search to find the file-entry that the given content-block index belongs to.
```js
await pda.findEntryByContentBlock(archive, 5)
/* => {
  name: '/foo.txt',
  start: 4,
  end: 6
}*/
```
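As a sketch of where this is useful, the following maps content-feed 'download' events from createNetworkActivityStream onto the files they belong to (whether an entry is always found is an assumption, so the result is checked):

```js
var es = pda.createNetworkActivityStream(archive)
es.on('data', async ([event, args]) => {
  if (event === 'download' && args.feed === 'content') {
    var entry = await pda.findEntryByContentBlock(archive, args.block)
    if (entry) console.log('block %d belongs to %s', args.block, entry.name)
  }
})
```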