Welcome
Awesome Dat
Dat Applications
datproject/dat
datproject/dat-desktop
Community Applications
codeforscience/sciencefair
mafintosh/hyperirc
jondashkyle/soundcloud-archiver
mafintosh/hypervision
joehand/hypertweet
beakerbrowser/dat-photos-app
High-Level APIs
datproject/dat-node
datproject/dat-js
beakerbrowser/pauls-dat-api
beakerbrowser/node-dat-archive
Hosting & Dat Management
mafintosh/hypercore-archiver
datprotocol/hypercloud
beakerbrowser/hashbase
joehand/dat-now
mafintosh/hypercore-archiver-bot
joehand/hypercore-archiver-ws
datproject/dat-registry-api
datproject/dat-registry-client
Managing & Aggregating Dats
datproject/multidat
datproject/multidrive
jayrbolton/dat-pki
beakerbrowser/injestdb
HTTP Hosting
joehand/hyperdrive-http
beakerbrowser/dathttpd
Dat Link Utilities
datprotocol/dat-dns
joehand/dat-link-resolve
pfrazee/parse-dat-url
juliangruber/dat-encoding
Dat Utilities
joehand/dat-log
mafintosh/dat-ls
karissa/hyperhealth
joehand/hyperdrive-network-speed
File Imports & Exports
juliangruber/hyperdrive-import-files
mafintosh/mirror-folder
pfrazee/hyperdrive-staging-area
pfrazee/hyperdrive-to-zip-stream
Hypercore Tools
mafintosh/hyperpipe
Dat Core Modules
mafintosh/hyperdrive
mafintosh/hypercore
CLI Utilities
joehand/dat-doctor
joehand/dat-ignore
joehand/dat-json
Networking
karissa/hyperdiscovery
mafintosh/discovery-swarm
mafintosh/webrtc-swarm
joehand/dat-swarm-defaults
Lower-Level Networking Modules
maxogden/discovery-channel
mafintosh/dns-discovery
mafintosh/multicast-dns
webtorrent/bittorrent-dht
mafintosh/utp-native
mafintosh/signalhub
Storage
datproject/dat-storage
datproject/dat-secret-storage
Random Access
juliangruber/abstract-random-access
mafintosh/multi-random-access
mafintosh/random-access-file
mafintosh/random-access-memory
mafintosh/random-access-page-files
datproject/dat-http
substack/random-access-idb
Other Related Dat Project Modules
mafintosh/peer-network
mafintosh/hyperdht
Dat Project Organization Stuff
datproject/datproject.org
datproject/discussions
datproject/design
datproject/dat-elements
kriesse/dat-colors
kriesse/dat-icons
juliangruber/dat.json
Outdated
juliangruber/dat.haus
poga/hyperfeed
yoshuawuyts/normcore
yoshuawuyts/github-to-hypercore
poga/hyperspark
juliangruber/hypercore-index
juliangruber/hyperdrive-encoding
mafintosh/hyperdrive-http-server
joehand/hyperdrive-http
joehand/dat-push
joehand/dat-backup
joehand/archiver-server
joehand/archiver-api
poga/hyperdrive-ln
substack/hyperdrive-multiwriter
substack/hyperdrive-named-archives
substack/git-dat
CfABrigadePhiladelphia/jawn
maxogden/dat-archiver
juliangruber/hyperdrive-stats
karissa/hypercore-stats-server
mafintosh/hypercore-stats-ui
karissa/zip-to-hyperdrive
joehand/url-dat
joehand/tar-dat
joehand/hyperdrive-duplicate

ScienceFair

The open source p2p desktop science library that puts users in control.

We've released 🎈 v1.0 🎈! But we're just getting started. Check out the roadmap to see where we're headed.


Why ScienceFair?

How we access, read and reuse scientific literature is largely controlled by a few vast publishing organisations. Many wonderful innovations are being explored outside those organisations, but they are rarely integrated into the platforms where people actually access science.

We have a vision of a different, better, future for science. A future that's more fair, inclusive and open. A future where people can explore and innovate and where users control and customise their experience.

ScienceFair aims to help pave the road to that future. The main thing that sets it apart? Freedom from centralised control.


We're creating a desktop experience for discovering, tracking, collecting and reading scientific articles that:

  • is completely free from external control (e.g. by publishers or platforms)
  • helps decentralise the distribution and storage of the scholarly literature
  • allows the user to customise their experience
  • promotes and integrates open data and metadata
  • helps grow an ecosystem of open source tools around scientific literature

downloads

You can download installers or bundled apps for Windows, Mac and Linux from the releases page.

Please note that ScienceFair is still young, so there will be bugs - we're working hard to polish it. If you'd like to report bugs in the issue tracker, that would be super helpful.

technical details

Some of the things that ScienceFair does differently:

A reading experience optimised for Science

We use the beautiful Lens reader to render JATS XML to a reading experience optimised for scientific papers.

[screenshot: reader]

Instant search of your local collection and remote datasources, only downloading the data requested.

[screenshot: search results]
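
This on-demand behaviour maps onto Dat's sparse replication: an archive can be opened so that data is only fetched when something actually reads it. The following is a rough, hypothetical sketch using the hyperdrive and hyperdiscovery modules listed above - the key and file paths are placeholders, not ScienceFair's actual code:

  // Rough sketch only: on-demand download via hyperdrive's sparse mode.
  const hyperdrive = require('hyperdrive')
  const discovery = require('hyperdiscovery')

  // Placeholder: the 64-character hex key of a datasource you want to read.
  const key = 'replace-with-a-datasource-key'

  // With sparse: true, nothing is downloaded until a read requests it.
  const archive = hyperdrive('./cache', key, { sparse: true })

  archive.on('ready', function () {
    discovery(archive) // connect to peers sharing this datasource

    // Reading one article pulls down only the blocks backing that file,
    // not the whole collection.
    archive.readFile('/articles/some-article.xml', 'utf-8', function (err, xml) {
      if (err) throw err
      console.log('fetched', xml.length, 'characters without syncing everything')
    })
  })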

Secure, flexible, distributed datasources

A ScienceFair datasource can be a journal, a curated community collection, a personal reading list... anything you like.

v1.0 comes with the eLife journal by default, and more will follow very soon.

Datasources are append-only feeds of JATS XML articles, signed with public-key cryptography and distributed peer-to-peer (using Dat). This means:

  • downloads come from the nearest, fastest sources
  • it doesn't matter if the original source goes offline
  • only the original creator can add new content
  • anyone can create a datasource (tools to make this easy coming soon)
  • your local collection of articles is ready for data mining

And importantly, datasources you create stay private unless you decide to share them, and no single party can take a shared datasource offline while other peers keep hosting it.
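
To make that concrete, here is a minimal, hypothetical sketch of a writable datasource built with the hyperdrive and hyperdiscovery modules from the list above; the file layout and names are illustrative, not ScienceFair's actual datasource format.

  // Hypothetical sketch: a writable, append-only datasource on top of hyperdrive.
  const hyperdrive = require('hyperdrive')
  const discovery = require('hyperdiscovery')

  // A keypair is generated on first run and stored with the data;
  // only the holder of the secret key can append new articles.
  const archive = hyperdrive('./my-datasource')

  archive.on('ready', function () {
    // Readers use this public key to find and replicate the datasource.
    console.log('datasource key:', archive.key.toString('hex'))

    // Add a JATS XML article; existing entries are never modified, only appended.
    const xml = Buffer.from('<article>...</article>')
    archive.writeFile('/articles/example-article.xml', xml, function (err) {
      if (err) throw err
      // Announce the archive on the p2p network so peers can mirror it.
      discovery(archive)
    })
  })

Anyone who loads the archive by its key gets a read-only replica, which is what keeps the feed append-only and under the sole control of its creator.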

Built-in bibliometrics and analytics

Basic bibliometrics are built in as of v1.0.

Full analysis and data-mining tools, altmetrics and enriched annotation will be coming soon.

[screenshot: selection]

ScienceFair also follows a few simple design principles that we feel are missing from the ecosystem:

  • we keep the interface minimal and clear
  • incremental discovery is the way
  • be beautiful

[screenshot: home screen]

development

This project uses Node v7, ideally the latest release. It also uses the two-package.json structure: development dependencies live in the root package.json, while the app's regular dependencies live in app/package.json.

To get a local copy working, clone this repo, then run:

  • npm install to install dev dependencies
  • cd app && npm install to install regular dependencies
  • cd .. && npm run dev to start in development mode

roadmap

  • [x] v1.0 proof of concept:
    • incorporate major new technologies (dat/hyperdrive, lens reader, instant search)
    • core user experience and design
    • development, packaging and distribution architecture in place
    • 1.0.x releases will be bug fixes and non-breaking improvements
  • [ ] v1.1 focus on datasources:
    • more, and bigger, datasources available by default
    • tools for creating and managing datasources
    • interface for creating and securely sharing p2p collections within the app
    • a platform and interface for discovering and managing datasources
  • [ ] v1.2 focus on enrichment:
    • altmetrics, updates (e.g. retractions), etc. displayed in context in realtime
    • advanced bibliometrics and data-mining tools
    • annotation and commenting, within the app and drawn from existing sources
  • [ ] v2.0 focus on user customisation:
    • a package system, allowing customising and extending key aspects of the experience
    • tools and documentation for making new packages
    • a platform and interface for discovering and managing packages