Welcome
Awesome Dat
Dat Applications
datproject/dat
datproject/dat-desktop
Community Applications
codeforscience/sciencefair
mafintosh/hyperirc
jondashkyle/soundcloud-archiver
mafintosh/hypervision
joehand/hypertweet
beakerbrowser/dat-photos-app
High-Level APIs
datproject/dat-node
datproject/dat-js
beakerbrowser/pauls-dat-api
beakerbrowser/node-dat-archive
Hosting & Dat Management
mafintosh/hypercore-archiver
datprotocol/hypercloud
beakerbrowser/hashbase
joehand/dat-now
mafintosh/hypercore-archiver-bot
joehand/hypercore-archiver-ws
datproject/dat-registry-api
datproject/dat-registry-client
Managing & Aggregating Dats
datproject/multidat
datproject/multidrive
jayrbolton/dat-pki
beakerbrowser/injestdb
HTTP Hosting
joehand/hyperdrive-http
beakerbrowser/dathttpd
Dat Link Utilities
datprotocol/dat-dns
joehand/dat-link-resolve
pfrazee/parse-dat-url
juliangruber/dat-encoding
Dat Utilities
joehand/dat-log
mafintosh/dat-ls
karissa/hyperhealth
joehand/hyperdrive-network-speed
File Imports & Exports
juliangruber/hyperdrive-import-files
mafintosh/mirror-folder
pfrazee/hyperdrive-staging-area
pfrazee/hyperdrive-to-zip-stream
Hypercore Tools
mafintosh/hyperpipe
Dat Core Modules
mafintosh/hyperdrive
mafintosh/hypercore
CLI Utilities
joehand/dat-doctor
joehand/dat-ignore
joehand/dat-json
Networking
karissa/hyperdiscovery
mafintosh/discovery-swarm
mafintosh/webrtc-swarm
joehand/dat-swarm-defaults
Lower level networking modules
maxogden/discovery-channel
mafintosh/dns-discovery
mafintosh/multicast-dns
webtorrent/bittorrent-dht
mafintosh/utp-native
mafintosh/signalhub
Storage
datproject/dat-storage
datproject/dat-secret-storage
Random Access
juliangruber/abstract-random-access
mafintosh/multi-random-access
mafintosh/random-access-file
mafintosh/random-access-memory
mafintosh/random-access-page-files
datproject/dat-http
substack/random-access-idb
Other Related Dat Project Modules
mafintosh/peer-network
mafintosh/hyperdht
Dat Project Organization Stuff
datproject/datproject.org
datproject/discussions
datproject/design
datproject/dat-elements
kriesse/dat-colors
kriesse/dat-icons
juliangruber/dat.json
Outdated
juliangruber/dat.haus
poga/hyperfeed
yoshuawuyts/normcore
yoshuawuyts/github-to-hypercore
poga/hyperspark
juliangruber/hypercore-index
juliangruber/hyperdrive-encoding
mafintosh/hyperdrive-http-server
joehand/hyperdrive-http
joehand/dat-push
joehand/dat-backup
joehand/archiver-server
joehand/archiver-api
poga/hyperdrive-ln
substack/hyperdrive-multiwriter
substack/hyperdrive-named-archives
substack/git-dat
CfABrigadePhiladelphia/jawn
maxogden/dat-archiver
juliangruber/hyperdrive-stats
karissa/hypercore-stats-server
mafintosh/hypercore-stats-ui
karissa/zip-to-hyperdrive
joehand/url-dat
joehand/tar-dat
joehand/hyperdrive-duplicate

random-access-file

Continuous reading or writing to a file using random offsets and lengths

npm install random-access-file


Why?

If you are receiving a file in multiple pieces in a distributed system, it can be useful to write those pieces to disk one by one, at various offsets throughout the file, without having to open and close a file descriptor every time.

random-access-file allows you to do just this.

Usage

var randomAccessFile = require('random-access-file')

var file = randomAccessFile('my-file.txt')

// write a buffer at offset 10
file.write(10, Buffer.from('hello'), function (err) {
  if (err) throw err
  // read 5 bytes back from offset 10
  file.read(10, 5, function (err, buffer) {
    if (err) throw err
    console.log(buffer) // -> <Buffer 68 65 6c 6c 6f>
    file.close(function () {
      console.log('file is closed')
    })
  })
})

file keeps an open file descriptor while it is in use. When you are done with the file you should call file.close().

API

var file = randomAccessFile(filename, [options])

Create a new file. Options include:

{
  truncate: false, // truncate the file before reading / writing
  length: someLength, // truncate the file to this size first
  readable: true, // should the file be opened as readable?
  writable: true  // should the file be opened as writable?
}
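For example, a minimal sketch of opening the same file read-only, using only the options documented above:

var readOnlyFile = randomAccessFile('my-file.txt', {
  writable: false // only open the file descriptor for reading
})

readOnlyFile.read(0, 5, function (err, buffer) {
  if (err) throw err
  console.log(buffer) // first 5 bytes of the file
})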

file.write(offset, buffer, [callback])

Write a buffer at a specific offset.

file.read(offset, length, callback)

Read length bytes at a specific offset. The callback is called with the buffer that was read.

file.del(offset, length, callback)

Truncates the file if offset + length is larger than the current file length. Otherwise it is a noop.
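For example, a sketch of trimming the tail of a file, assuming it is currently 20 bytes long (a hypothetical size used only for illustration):

// offset + length (15 + 10 = 25) is larger than the current
// file length (20), so the file is truncated; with a smaller
// range this call would be a noop
file.del(15, 10, function (err) {
  if (err) throw err
})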

file.end([options], callback)

Call this method when the entire file has been written. Options include:

{
  mtime: mtime, // set the file's mtime
  atime: atime // set the file's atime
}
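For instance, a small sketch that finishes a download and stamps the modification time (assuming mtime accepts a Date, as fs.utimes does; the README above does not specify the accepted type):

file.end({ mtime: new Date() }, function (err) {
  if (err) throw err
  console.log('all pieces written, mtime updated')
})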

file.close([callback])

Close the underlying file descriptor.

file.unlink([callback])

Unlink the underlying file.

file.on('open')

Emitted when the file descriptor has been opened. You can access the fd using file.fd. You do not need to wait for this event before doing any reads/writes.
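For example, a short sketch that logs the descriptor once it is available while issuing a write immediately:

file.on('open', function () {
  console.log('backed by file descriptor', file.fd)
})

// per the note above, writes do not need to wait for 'open'
file.write(0, Buffer.from('hi'), function (err) {
  if (err) throw err
})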

file.on('close')

Emitted when the file has been closed.

License

MIT