How to use Cayley in production


#2

Not answering your question in its entirety, but we have chosen to go with Postgres because of its well-known durability and optimization. It also supports clustering, and for our corporate environment we love using RDS on AWS.

We have also decided to go with two Cayley servers: one focused on inserting data and one focused on reading data.

I don't think we will get any sort of true HA out of either Postgres or this setup, but it does give us an Active/Passive approach if need be. Ultimately I would love to see a powerful distributed key/value store like Cassandra or Accumulo used here, but for now Postgres is meeting our needs quite nicely.
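If it helps, here is a rough sketch of how that split can look from the Go side: both the write-focused and the read-focused Cayley process open the same Postgres-backed quad store and differ only in what they do with the handle. The DSN is made up, the backend name and import path vary between Cayley releases, and it assumes the store was already initialized (e.g. with `cayley init`):

```go
package main

import (
	"log"

	"github.com/cayleygraph/cayley"
	_ "github.com/cayleygraph/cayley/graph/sql" // registers the SQL (Postgres) backend; path varies by version
)

func main() {
	// Hypothetical DSN: both processes point at the same Postgres/RDS instance.
	const addr = "postgres://cayley:secret@my-rds-host:5432/cayleydb?sslmode=disable"

	store, err := cayley.NewGraph("sql", addr, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer store.Close()

	// ... ingest quads here in the write-focused process,
	// or serve read queries here in the read-focused process.
}
```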


#3

Are there replication capabilities for Cayley? I know that for Neo4j, the free "community" edition is full-featured but is not meant to be used in production, because they deliberately left out features designed to help it scale ("cache sharding", "cluster replication", etc.): https://neo4j.com/editions/


#4

@oren I was initially confused by the terms you use: “Cayley to run from the binary - As Application or As Library”

Perhaps these terms would be more consistent with how other database websites use them:

  1. Cayley as a server (“Cayley Server”)
  2. Cayley as embedded database (“embedded Cayley”)
  3. Client Libraries (https://github.com/cayleygraph/cayley/blob/master/docs/3rd-Party-APIs.md)

When I first read Cayley "As Library", I kept thinking of client libraries for connecting to a Cayley server from other languages.
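To make the distinction concrete, here is roughly what "embedded Cayley" looks like, adapted from the hello-world example in the Cayley repository (exact signatures vary between releases): the graph lives inside the process and is queried through the Go API, with no server and no client library involved.

```go
package main

import (
	"fmt"
	"log"

	"github.com/cayleygraph/cayley"
	"github.com/cayleygraph/cayley/quad"
)

func main() {
	// "Embedded Cayley": the quad store lives inside this process.
	store, err := cayley.NewMemoryGraph()
	if err != nil {
		log.Fatal(err)
	}

	// Add a single quad: subject, predicate, object, optional label.
	store.AddQuad(quad.Make("phrase of the day", "is of course", "Hello World!", nil))

	// Query it directly through the Go path API; no HTTP round-trip involved.
	p := cayley.StartPath(store, quad.String("phrase of the day")).Out(quad.String("is of course"))
	err = p.Iterate(nil).EachValue(nil, func(v quad.Value) {
		fmt.Println(quad.NativeOf(v)) // prints: Hello World!
	})
	if err != nil {
		log.Fatal(err)
	}
}
```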


#5

When I had a look at the PHP client library, I noticed it was using the Cayley server's HTTP interface (obviously). The problem is that this approach adds a lot of latency on a busy website (i.e. open connection, make request, get response, close connection, then repeat the process).

MySQL and MongoDB usually keep connections open and the client libraries implement connection pooling.

I was wondering if a future possibility for the Cayley server would be to use WebSockets (e.g. using Gorilla WebSocket: https://github.com/gorilla/websocket) or some other way to create a persistent connection. That should improve performance drastically.

Implementing WebSockets would probably be trivial. Other techniques would probably be more difficult.
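To make the suggestion concrete, a persistent-connection query endpoint built on Gorilla WebSocket is only a few lines of Go. This is just a sketch of the pattern, not something Cayley ships: `runQuery` is a made-up placeholder for handing the query to whatever engine sits behind the socket.

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{ReadBufferSize: 1024, WriteBufferSize: 1024}

// runQuery is a hypothetical placeholder for executing a query against Cayley.
func runQuery(q string) string { return "result for: " + q }

// wsQuery keeps one WebSocket open per client and answers many queries over
// it, avoiding a TCP/HTTP handshake per request.
func wsQuery(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()

	for {
		// Each incoming message is one query; the reply goes back on the same connection.
		_, msg, err := conn.ReadMessage()
		if err != nil {
			return // client went away
		}
		if err := conn.WriteMessage(websocket.TextMessage, []byte(runQuery(string(msg)))); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/ws/query", wsQuery)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```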


#6

Sorry about the confusion. That's why we are planning to add docs for the vocabulary, so we'll be on the same page.


#7

Obviously this is confusing too: "Another choice you’ll have to make is what Database you want to use."
Since many people are used to "the database" referring to the actual product, perhaps, to avoid confusion, "backing store" might be better. MySQL uses "Storage Engine".


#8

Agree. I like Storage Engine.


#9

Storage Engine is great


#10

Does the storage engine interface work with a key/value database like Redis? Would Redis be faster than using Cayley with the native "in memory" store?


#11

I doubt much is faster than in-memory, but yes, BoltDB is a K/V.


#12

Yeah, any call out to another service… is a call out to another service.

Memory is likely the fastest, hands down. It also exists for ephemeral or load-process-dump workloads - a buddy of mine is writing a compiler that holds the AST this way. And tests.

Could redis be a backend? Sure. Is it an improvement over Bolt even? Unlikely. There’s a case for it in the “redis is our only source of truth” way, but it’s not a high priority at this point.


#13

My question sounds stupid lol
I just thought of an idea to really make Cayley ultra fast - borrowed from how Redis operates.

Just some background:
Redis’s fame is due to being an ultra-fast in-memory database that is used heavily for caching. It provides some persistence capabilities using the techniques outlined in the link above (i.e. taking snapshots at regular intervals, etc.).

Using a storage engine will only slow Cayley down. Perhaps a really cool option would be to primarily advertise Cayley as a super-fast in-memory graph database with ancillary persistence capabilities (if you enable them) using the storage engines. The storage engines could be used on a snapshotting basis (or with the other techniques Redis uses).

The downside is that if the server crashes, reloading the database takes longer (but Redis does this already and everyone loves it). There are also consistency issues, but people who use this mode will be aware of them and accept the risk. Also, the database size is limited by the server's memory, but Redis operates like this too.

Alternatively, Option B is what’s happening right now - you use Cayley with a storage engine with full consistency and persistence.

There is a subtle difference between the two options, but differentiating between them is a selling point.

Summary:

  1. Option A: Super-fast in-memory graph database with option to have some persistence
  2. Option B: Fully persistent graph database.

Some users will want Option A with a bit of persistence, others will want Option A with no persistence (equivalent to just using in-memory Cayley right now), and others will prefer Option B (equivalent to using a storage engine).
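Option A could be as simple as a background snapshot loop like the sketch below. `exportGraph` is a hypothetical placeholder for whatever quad-dumping mechanism the running Cayley version exposes; everything else is just the Redis-style snapshot idea spelled out.

```go
package main

import (
	"io"
	"log"
	"os"
	"time"
)

// exportGraph is a hypothetical hook: it would stream every quad in the
// in-memory store to w (e.g. as N-Quads). The real call depends on the
// Cayley version and API you run against.
func exportGraph(w io.Writer) error {
	return nil
}

// snapshotLoop persists the in-memory graph to disk at a fixed interval,
// in the spirit of Redis RDB snapshots. Writes made after the last
// snapshot are lost on a crash, which is exactly the trade-off above.
func snapshotLoop(interval time.Duration, path string) {
	for range time.Tick(interval) {
		tmp := path + ".tmp"
		f, err := os.Create(tmp)
		if err != nil {
			log.Println("snapshot:", err)
			continue
		}
		if err := exportGraph(f); err != nil {
			log.Println("snapshot:", err)
			f.Close()
			continue
		}
		f.Close()
		// Rename is atomic, so a crash mid-snapshot never clobbers the last good file.
		if err := os.Rename(tmp, path); err != nil {
			log.Println("snapshot:", err)
		}
	}
}

func main() {
	go snapshotLoop(5*time.Minute, "graph.nq")
	select {} // placeholder: the real process would be serving queries here
}
```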


#14

I get what you’re saying.

So there’s a note in my journal to someday write a blog post about the temporality of data. The short version is that there are time series, caches, etc. that are supposed to be super fast but lose usefulness quickly (e.g., the exact millisecond someone loaded a webpage). Over time we aggregate these into users and sessions and engagement, which change more slowly.

Graphs tend toward the latter category: slow-moving but highly interrelated data. So, yeah, we could be the fastest on the block, but that sort of misses the point. You want a large graph that's persistent.


#15

@pjebs, my goal is a bit different: I want people to consider Cayley with some persistent store wherever they might consider using a standard DB. For many use cases this is better for growth and better for design (schema-last, as @barakmich would say), and while it does make some easy cases slightly harder (storing users, aggregating stats), it enables smooth growth and constantly growing interlinked data, which is the world we live in today.


#16

I actually agree with you. What was the rationale for in-memory storage in the first place? Was it just used as a proof of concept or for testing when you first started the project?


#17

A little bit of both. And it has its place with the tiny graphs you might encounter (compile graphs are great examples, again: they live as long as the process, but are hugely interrelated). But when we talk about datasets and schema, they're all persistent.


#18

Has anyone looked at or thought about potentially using something like http://druid.io/ ?
It's been around for some time and seems like it could fit all the buckets needed: in-memory, distributed/scalable, and it has backups.


#19

The way we store data in Cayley would defeat a lot of what Druid invests time in (aggregation, etc.). That said, more Storage Engines (backends) are always welcome. We are never going to find "the one", and we don't want to; the goal is to be a shim on top of multiple storage engines.


#20

btw, here is the Glossary Of Terms page.


#21

Is there a recommended storage engine to use if you plan on keeping the files in S3 (which is like a distributed KV store)?