Welcome
Awesome Dat
Dat Applications
datproject/dat
datproject/dat-desktop
Community Applications
codeforscience/sciencefair
mafintosh/hyperirc
jondashkyle/soundcloud-archiver
mafintosh/hypervision
joehand/hypertweet
beakerbrowser/dat-photos-app
High-Level APIs
datproject/dat-node
datproject/dat-js
beakerbrowser/pauls-dat-api
beakerbrowser/node-dat-archive
Hosting & Dat Management
mafintosh/hypercore-archiver
datprotocol/hypercloud
beakerbrowser/hashbase
joehand/dat-now
mafintosh/hypercore-archiver-bot
joehand/hypercore-archiver-ws
datproject/dat-registry-api
datproject/dat-registry-client
Managing & Aggregating Dats
datproject/multidat
datproject/multidrive
jayrbolton/dat-pki
beakerbrowser/injestdb
HTTP Hosting
joehand/hyperdrive-http
beakerbrowser/dathttpd
Dat Link Utilities
datprotocol/dat-dns
joehand/dat-link-resolve
pfrazee/parse-dat-url
juliangruber/dat-encoding
Dat Utilities
joehand/dat-log
mafintosh/dat-ls
karissa/hyperhealth
joehand/hyperdrive-network-speed
File Imports & Exports
juliangruber/hyperdrive-import-files
mafintosh/mirror-folder
pfrazee/hyperdrive-staging-area
pfrazee/hyperdrive-to-zip-stream
Hypercore Tools
mafintosh/hyperpipe
Dat Core Modules
mafintosh/hyperdrive
mafintosh/hypercore
CLI Utilities
joehand/dat-doctor
joehand/dat-ignore
joehand/dat-json
Networking
karissa/hyperdiscovery
mafintosh/discovery-swarm
mafintosh/webrtc-swarm
joehand/dat-swarm-defaults
Lower-Level Networking Modules
maxogden/discovery-channel
mafintosh/dns-discovery
mafintosh/multicast-dns
webtorrent/bittorrent-dht
mafintosh/utp-native
mafintosh/signalhub
Storage
datproject/dat-storage
datproject/dat-secret-storage
Random Access
juliangruber/abstract-random-access
mafintosh/multi-random-access
mafintosh/random-access-file
mafintosh/random-access-memory
mafintosh/random-access-page-files
datproject/dat-http
substack/random-access-idb
Other Related Dat Project Modules
mafintosh/peer-network
mafintosh/hyperdht
Dat Project Organization Stuff
datproject/datproject.org
datproject/discussions
datproject/design
datproject/dat-elements
kriesse/dat-colors
kriesse/dat-icons
juliangruber/dat.json
Outdated
juliangruber/dat.haus
poga/hyperfeed
yoshuawuyts/normcore
yoshuawuyts/github-to-hypercore
poga/hyperspark
juliangruber/hypercore-index
juliangruber/hyperdrive-encoding
mafintosh/hyperdrive-http-server
joehand/hyperdrive-http
joehand/dat-push
joehand/dat-backup
joehand/archiver-server
joehand/archiver-api
poga/hyperdrive-ln
substack/hyperdrive-multiwriter
substack/hyperdrive-named-archives
substack/git-dat
CfABrigadePhiladelphia/jawn
maxogden/dat-archiver
juliangruber/hyperdrive-stats
karissa/hypercore-stats-server
mafintosh/hypercore-stats-ui
karissa/zip-to-hyperdrive
joehand/url-dat
joehand/tar-dat
joehand/hyperdrive-duplicate

InjestDB

A peer-to-peer database for dat:// applications.

Example

Set up a database of social profiles, which can publish status updates and like other users' posts.

const assert = require('assert')
const Injest = require('injestdb')

// helper used by the validators below
const isString = (v) => typeof v === 'string'

var db = new Injest('social-profiles')
db.schema({
  version: 1,
  broadcasts: {
    primaryKey: 'createdAt',
    index: [
      'createdAt',
      '_origin+createdAt' // compound index. '_origin' is an autogenerated attribute which represents the URL of the authoring archive
    ],
    validator: record => {
      assert(typeof record.text === 'string')
      assert(typeof record.createdAt === 'number')
      return record
    }
  },
  likes: {
    primaryKey: 'createdAt',
    index: 'targetUrl',
    validator: record => {
      assert(typeof record.targetUrl === 'string')
      return record
    }
  },
  profiles: {
    singular: true,
    index: 'name',
    validator: record => {
      assert(typeof record.name === 'string')
      return {
        name: record.name,
        description: isString(record.description) ? record.description : '',
        avatarUrl: isString(record.avatarUrl) ? record.avatarUrl : ''
      }
    }
  }
})

Next we add source archives to be ingested (added to the dataset). The source archives are persisted in IndexedDB, so this doesn't have to be done on every run.

await db.addArchives([alicesUrl, bobsUrl, carlasDatArchive], {prepare: true})

Now we can begin querying the database for records.

// get the first profile record where name === 'bob'
var bobProfile = await db.profiles.get('name', 'bob')

// get all profile records which match this query
var bobProfiles = await db.profiles
  .where('name')
  .equalsIgnoreCase('bob')
  .toArray()

// get the 30 latest broadcasts from all source archives
var recentBroadcasts = await db.broadcasts
  .orderBy('createdAt')
  .reverse() // most recent first
  .limit(30)
  .toArray()

// get the 30 latest broadcasts by a specific archive (bob)
// - this uses a compound index to filter by origin, and then sort by createdAt
var bobsRecentBroadcasts = await db.broadcasts
  .where('_origin+createdAt')
  .between([bobsUrl, ''], [bobsUrl, '\uffff'])
  .reverse() // most recent first
  .limit(30)
  .toArray()

// get the # of likes for a broadcast
var numLikes = await db.likes
  .where('targetUrl').equals(bobsRecentBroadcasts[0]._url) // _url is an autogenerated attribute which represents the URL of the record
  .count()

We can also use Injest to create, modify, and delete records (and their matching files).

// update bob's name
await db.profiles.update(bobsUrl, {name: 'robert'})

// publish a new broadcast for bob
var broadcastUrl = await db.broadcasts.add(bobsUrl, {
  text: 'Hello!',
  createdAt: Date.now()
})

// modify the broadcast
await db.broadcasts.update(broadcastUrl, {text: 'Hello world!'})

// like the broadcast
await db.likes.add(bobsUrl, {
  targetUrl: broadcastUrl,
  createdAt: Date.now()
})

// delete the broadcast
await db.broadcasts.delete(broadcastUrl)

// delete all likes on the broadcast (that we control)
await db.likes
  .where({targetUrl: broadcastUrl})
  .delete()

TODOs

Injest is still in development.

  • [x] Indexer
  • [x] Core query engine
  • [x] Persisted tables and table reindex on schema change
  • [x] Mutation methods (add/update/delete)
  • [ ] Events
  • [x] Multikey indexes
  • [ ] Validation: filename must match primaryKey on non-singular tables
  • [ ] Support for .or() queries
  • [ ] Complete documentation

API quick reference

var db = new InjestDB(name)
InjestDB.list() => Promise<Void>
InjestDB.delete(name) => Promise<Void>
db.open() => Promise<Void>
db.close() => Promise<Void>
db.schema(Object) => Promise<Void>
db.addArchive(url|DatArchive, {prepare: Boolean}) => Promise<Void>
db.addArchives(Array<url|DatArchive>, {prepare: Boolean}) => Promise<Void>
db.removeArchive(url|DatArchive) => Promise<Void>
db.prepareArchive(url|DatArchive)
db.listArchives() => Promise<url>
db 'open' ()
db 'open-failed' (error)
db 'versionchange' ()
db 'indexes-updated' (archive, archiveVersion)

db.{table} => InjestTable
InjestTable#add(archive, record) => Promise<url>
InjestTable#count() => Promise<Number>
InjestTable#delete(url) => Promise<url>
InjestTable#each(Function) => Promise<Void>
InjestTable#filter(Function) => InjestRecordset
InjestTable#get(url) => Promise<Object>
InjestTable#get(archive) => Promise<Object>
InjestTable#get(archive, key) => Promise<Object>
InjestTable#get(index, value) => Promise<Object>
InjestTable#isRecordFile(String) => Boolean
InjestTable#limit(Number) => InjestRecordset
InjestTable#listRecordFiles(Archive) => Promise<Object>
InjestTable#name => String
InjestTable#offset(Number) => InjestRecordset
InjestTable#orderBy(index) => InjestRecordset
InjestTable#put(url, record) => Promise<url>
InjestTable#reverse() => InjestRecordset
InjestTable#schema => Object
InjestTable#toArray() => Promise<Array>
InjestTable#toCollection() => InjestRecordset
InjestTable#update(record) => Promise<Number>
InjestTable#update(url, updates) => Promise<Number>
InjestTable#update(archive, updates) => Promise<Number>
InjestTable#update(archive, key, updates) => Promise<Number>
InjestTable#upsert(url, record) => Promise<Void | url>
InjestTable#upsert(archive, record) => Promise<Void | url>
InjestTable#where(index) => InjestWhereClause
InjestTable 'index-updated' (archive, archiveVersion)

InjestWhereClause#above(lowerBound) => InjestRecordset
InjestWhereClause#aboveOrEqual(lowerBound) => InjestRecordset
InjestWhereClause#anyOf(Array|...args) => InjestRecordset
InjestWhereClause#anyOfIgnoreCase(Array|...args) => InjestRecordset
InjestWhereClause#below(upperBound) => InjestRecordset
InjestWhereClause#belowOrEqual(upperBound) => InjestRecordset
InjestWhereClause#between(lowerBound, upperBound, {includeUpper, includeLower}) => InjestRecordset
InjestWhereClause#equals(value) => InjestRecordset
InjestWhereClause#equalsIgnoreCase(value) => InjestRecordset
InjestWhereClause#noneOf(Array|...args) => InjestRecordset
InjestWhereClause#notEqual(value) => InjestRecordset
InjestWhereClause#startsWith(value) => InjestRecordset
InjestWhereClause#startsWithAnyOf(Array|...args) => InjestRecordset
InjestWhereClause#startsWithAnyOfIgnoreCase(Array|...args) => InjestRecordset
InjestWhereClause#startsWithIgnoreCase(value) => InjestRecordset

InjestRecordset#clone() => InjestRecordset
InjestRecordset#count() => Promise<Number>
InjestRecordset#delete() => Promise<Number>
InjestRecordset#distinct() => InjestRecordset
InjestRecordset#each(Function) => Promise<Void>
InjestRecordset#eachKey(Function) => Promise<Void>
InjestRecordset#eachUrl(Function) => Promise<Void>
InjestRecordset#filter(Function) => InjestRecordset
InjestRecordset#first() => Promise<Object>
InjestRecordset#keys() => Promise<Array<String>>
InjestRecordset#last() => Promise<Object>
InjestRecordset#limit(Number) => InjestRecordset
InjestRecordset#offset(Number) => InjestRecordset
InjestRecordset#or(index) => InjestWhereClause
InjestRecordset#put(Object) => Promise<Number>
InjestRecordset#urls() => Promise<Array<String>>
InjestRecordset#reverse() => InjestRecordset
InjestRecordset#toArray() => Promise<Array<Object>>
InjestRecordset#uniqueKeys() => Promise<Array<String>>
InjestRecordset#until(Function) => InjestRecordset
InjestRecordset#update(Object|Function) => Promise<Number>
InjestRecordset#where(index) => InjestWhereClause

API

db.schema(definition)

{
  version: Number, // the version # of the schema, should increment by 1 on each change

  [tableName]: {
    // is there only one record-file per archive?
    // - if true, will look for the file at `/${tableName}.json`
    // - if false, will look for files at `/${tableName}/*.json`
    singular: Boolean,

    // attribute to build filenames for newly-created records
    // ie `/${tableName}/${record[primaryKey]}.json`
    // only required if !singular
    primaryKey: String, 

    // specify which fields are indexed for querying
    // each is a keypath, see https://www.w3.org/TR/IndexedDB/#dfn-key-path
    // can specify compound indexes with a + separator in the keypath
    // eg one index               - index: 'firstName' 
    // eg two indexes             - index: ['firstName', 'lastName']
    // eg add a compound index    - index: ['firstName', 'lastName', 'firstName+lastName']
    // eg index an array's values - index: ['firstName', '*favoriteFruits']
    index: String|Array<String>,

    // validator & sanitizer
    // takes the ingested file (must be valid json)
    // returns the record to be stored
    // returns falsy or throws to not store the record
    validator: Function(Object) => Object
  }
}
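
As an illustration of the keypath rules above (not part of the InjestDB API), a simple keypath resolves to the attribute's value, while a compound keypath like '_origin+createdAt' resolves to an array key. This sketch ignores the '*'-prefixed multi-entry form:

```javascript
// Illustration only: how an index keypath maps a record to an IndexedDB key.
// Simple keypaths yield a scalar key; compound keypaths ('a+b') yield an array key.
function resolveKeypath (keypath, record) {
  if (!keypath.includes('+')) return record[keypath]
  return keypath.split('+').map(attr => record[attr])
}

resolveKeypath('createdAt', {createdAt: 12345})
// -> 12345
resolveKeypath('_origin+createdAt', {_origin: 'dat://bob.com', createdAt: 12345})
// -> ['dat://bob.com', 12345]
```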

How it works

InjestDB abstracts over the DatArchive API to provide a simple database-like interface. It is based heavily on Dexie.js and built using IndexedDB.

Injest works by scanning a set of source archives for files that match a path pattern. Those files are indexed ("ingested") so that they can be queried easily. Injest also provides a simple interface for adding, editing, and removing records on the archives that the local user owns.

Injest sits on top of Dat archives. It duplicates the data it's handling into IndexedDB, and that duplicated data acts as a throwaway cache -- it can be reconstructed at any time from the Dat archives.

Injest treats individual files in the Dat archive as individual records in a table. As a result, each table maps directly to a folder of .json files. For instance, a 'tweets' table would map to the /tweets/*.json files. Injest's mutators, such as put, add, and update, simply write those JSON files. Injest's readers and queriers, such as get() or where(), read from the IndexedDB cache.

Injest watches its source archives for changes to the JSON files. When they change, it reads them and updates IndexedDB, so the query results stay up to date. The flow is, roughly: put() -> archive/tweets/12345.json -> indexer -> indexeddb -> get().
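
To make the record-to-file mapping concrete, here is a small sketch (not InjestDB code) of how a record's file path would be derived, following the singular/primaryKey rules described in the schema section above:

```javascript
// Sketch of the record-to-file mapping described above (not part of InjestDB).
function recordPath (tableName, tableSchema, record) {
  if (tableSchema.singular) {
    // singular tables keep one record-file per archive
    return `/${tableName}.json`
  }
  // non-singular tables store one file per record, named by the primary key
  return `/${tableName}/${record[tableSchema.primaryKey]}.json`
}

recordPath('tweets', {primaryKey: 'createdAt'}, {createdAt: 12345, text: 'Hello!'})
// -> '/tweets/12345.json'
recordPath('profile', {singular: true}, {name: 'bob'})
// -> '/profile.json'
```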