Fullstack JavaScript – Neo4j Script Procedures

Posted by Michael Hunger on Apr 1, 2017 in cypher, neo4j

Imagine being a fullstack JavaScript developer who uses the language not just in the frontend, middleware or backend, but also to create user-defined procedures and functions in the database.

Several other databases support a similar approach for views and user defined extensions, and now you can do it with Neo4j too.

Early last year, when Neo4j’s user-defined procedures were still in their infancy, I had just written an article about Java’s JavaScript engine “Nashorn”.

So naturally I experimented with using procedures to dynamically create and run JavaScript functions.

The function mapping is stored in Neo4j’s graph properties.

You could create JavaScript functions with a name and body and then later call them by name, passing parameters along.

CALL scripts.function('users', '
function users(name) {
  return collection(db.findNodes(label("User"), "lastname", name));
}');

CALL scripts.run('users','Anderson') YIELD value as user;

// or call as function, returns a list
RETURN scripts.run('users','Smith');

That worked all quite well, but I didn’t find the time to turn that into a proper project.

Later in the year I got some feature and pull requests on the APOC procedure library to add such functions.

As there are some concerns, especially from corporate users, about scripting support, I pulled my work into a separate project: Neo4j Script Procedures.

So, when I came across this tweet, it reminded me of wanting to update the project.

I thought it was a good opportunity to upgrade and release the project.

So, now you can try to run JavaScript functions from Neo4j’s Cypher by grabbing the jar-file from the latest release.

Just put it into $NEO4J_HOME/plugins and restart your server.

Note: In Neo4j Community Desktop, there is a directory chooser on the “Options” tab for the plugins directory.

The current neo4j-script-procedures release does not support Neo4j 3.1.2, as there are some incompatibilities with procedures creating new property-names.
It should work with 3.1.0, 3.1.1 or 3.1.3 though.

Let me know what you think and how we can improve this useful little library; please raise issues on the repository for feedback and problems.



Creating a Neo4j Example Graph with the Arrows Tool

Posted by Michael Hunger on Mar 21, 2017 in cypher, import

Some years ago my colleague Alistair Jones created a neat little tool in JavaScript to edit and render example graphs in a consistent way.

It is aptly named Arrows and you can find it here: http://www.apcjones.com/arrows

We mostly use it for presentations, but also to show data models for Neo4j GraphGists and Neo4j Browser Guides.
Because it also stores the positions of nodes, it always keeps the same layout and doesn’t wiggle around.


Read more…


Academy Awards (Oscars) from Kaggle to Neo4j

Posted by Michael Hunger on Mar 9, 2017 in cypher, import, neo4j
This is part 1; in the next part, we’ll look at using import.io to scrape IMDB and the Academy Awards Database.
You can query the imported data in this instance (user/pass:oscars) of the brand new Neo4j Sandbox.

I came across this tweet from @LynnLangit about her first steps with mxnet, which I really liked.

lynn langit mxnet.jpg

So I wanted to do the same for Neo4j and was looking for a good dataset.

Then I realized that the 89th Academy Awards (Oscars) ceremony was the next day.
I was really looking forward to it, hoping it would come with some strong statements towards the current administration, and to him rage-tweeting about it on Monday morning.

But instead we got a fun Jimmy Kimmel performance and the well-known Moonlight and La La Land mix-up by the (ex-)PwC people.

So I found the data and imported it and had this post ready to go.

But then I got distracted trying to scrape IMDB with import.io and missed the date.

But as it is a nice dataset that is, interestingly, not as widely available as you’d think, I feel it’s still worth publishing.

So enjoy my struggles with data (quality).

Read more…


5 Tips & Tricks for Fast Batched Updates of Graph Structures with Neo4j and Cypher

Posted by Michael Hunger on Mar 2, 2017 in Uncategorized, cypher, import, neo4j

Michael Hunger, @mesirii

When you’re writing a lot of data to the graph from your application or library, you want to be efficient.

Inefficient Solutions

These approaches are not very efficient:

  • hard coding values instead of using parameters

  • sending a single query / tx per individual update

  • sending many single queries within a single tx with individual updates

  • generating large, complex statements (hundreds of lines) and sending one of them per tx and update

  • sending HUGE numbers (millions) of updates in a single tx, which will cause out-of-memory issues
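The efficient alternative is one parameterized statement per transaction with a moderately sized batch of updates passed as a list of maps. A minimal sketch (the batch parameter name and properties are illustrative):

UNWIND {batch} AS row
MERGE (p:Person {userId: row.userId})
ON CREATE SET p.name = row.name

You would then send e.g. 10k to 50k rows per transaction as the "batch" parameter.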

Read more…


The Reddit Meme Graph with Neo4j

Posted by Michael Hunger on Feb 25, 2017 in cypher, import

Saturday night after not enough drinks, I came across these tweets by @LeFloatingGhost.

memegraph tweet.jpg

This definitely looks like a meme graph.

We can do that too

memegraph meme.jpg

Read more…



User Defined Functions in Neo4j 3.1.0-M10

Posted by Michael Hunger on Oct 6, 2016 in apoc, cypher

Neo4j 3.1 brings some really neat improvements in Cypher, alongside other cool features.

I already demonstrated the – GraphQL inspired – map projections and pattern comprehensions in my last blog post.

User Defined Procedures

In the 3.0 release, my personal favorite was user-defined procedures, which can be implemented using Neo4j’s Java API and called directly from Cypher.
You can tell, because I wrote about half of the 270 procedures for the APOC procedure collection, with the remainder provided by other contributors.

Remember the syntax: … CALL namespace.procedure(arg1, arg2) YIELD col1, col2 AS alias …

MATCH (from:Place {coords:{from}}), (to:Place {coords:{to}})

CALL apoc.algo.dijkstra(from, to, "ROAD", "cost") YIELD path, weight

RETURN nodes(path)
ORDER BY weight LIMIT 10;

Read more…


Neo4j 3.0 Stored Procedures

Posted by Michael Hunger on Feb 29, 2016 in cypher, java

One of the many exciting features of Neo4j 3.0 is “Stored Procedures” which, unlike the existing Neo4j server extensions, are directly callable from Cypher.

At the time of this writing it is only possible to call them in a stand-alone statement with CALL package.procedure(params)
but the plan is to make them a fully integrated part of Cypher statements.
Either by making CALL a clause or by turning procedures into function-expressions (which would be my personal favorite).

Currently procedures can only be written in Java (or other JVM languages).
You might say, “WTF … Java”, but it is less tedious than it sounds.

First of all, the effort of setting up a procedure project, writing and building it is minimal.

To get up and running you first need a recent copy of Neo4j 3.0,
either the 3.0.0-M04 milestone or the latest build from the Alpha Site.

To get you started you also need a JDK and a build tool like Gradle or Maven.

You can effectively copy the procedure template example that Jake Hansson provided in neo4j-examples as a starting point.

But let me quickly walk you through an even simpler example (GitHub Repository).

You need to declare the org.neo4j:neo4j:3.0.0[-M04] dependency in the provided scope, to get the necessary annotations and the Neo4j API to talk to the database.

project.ext {
    neo4j_version = "3.0.0-M04"
}

dependencies {
	compile group: "org.neo4j", name: "neo4j", version: project.neo4j_version
	testCompile group: "org.neo4j", name: "neo4j-kernel", version: project.neo4j_version, classifier: "tests"
	testCompile group: "org.neo4j", name: "neo4j-io", version: project.neo4j_version, classifier: "tests"
	testCompile group: "junit", name: "junit", version: "4.12"
}
If you have a great idea on what kind of procedure you want to write, just open a file with a new class.

Please note that only the package and method names become the procedure name (the class name is not included).

In our example we will create a very simple procedure that just computes the minimum and maximum degrees of a certain label.

The reference to Neo4j’s GraphDatabaseService instance is injected into your class into the field annotated with @Context.
As procedures are meant to be stateless, declaring non-injected non-static fields is not allowed.

In our case the procedure will be named stats.degree and called like CALL stats.degree('User').

package stats;

import java.util.stream.Stream;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.ResourceIterator;
import org.neo4j.procedure.Context;
import org.neo4j.procedure.Name;
import org.neo4j.procedure.Procedure;

public class GraphStatistics {

    // injected by Neo4j; @Context fields must be public and non-static
    @Context public GraphDatabaseService db;

    // Result class
    public static class Degree {
        public String label;
        // note that "int" values are not supported
        public long count, max, min = Long.MAX_VALUE;

        public Degree(String label) { this.label = label; }

        // method to consume a degree and compute min, max, count
        private void add(long degree) {
            if (degree < min) min = degree;
            if (degree > max) max = degree;
            count++;
        }
    }

    @Procedure
    public Stream<Degree> degree(@Name("label") String label) {
        // create holder class for results
        Degree degree = new Degree(label);
        // iterate over all nodes with label
        try (ResourceIterator<Node> it = db.findNodes(Label.label(label))) {
            while (it.hasNext()) {
                // submit degree to holder for consumption (i.e. max, min, count)
                degree.add(it.next().getDegree());
            }
        }
        // we only return a "Stream" of a single element in this case
        return Stream.of(degree);
    }
}

If you want to test procedures quickly without spinning up a server and connecting to it remotely (e.g. via the new binary Bolt protocol as shown in the procedure-template), you can use the test facilities of Neo4j’s Java API.

Now we can test our new and shiny procedure by writing a small unit-test.

package stats;

import java.util.Map;
import org.junit.Before;
import org.junit.Test;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Result;
import org.neo4j.kernel.impl.proc.Procedures;
import org.neo4j.kernel.internal.GraphDatabaseAPI;
import org.neo4j.test.TestGraphDatabaseFactory;
import static org.junit.Assert.*;

public class GraphStatisticsTest {
    private GraphDatabaseService db;

    @Before public void setUp() throws Exception {
        // impermanent test database from the neo4j-kernel test-jar
        db = new TestGraphDatabaseFactory().newImpermanentDatabase();
        // register our procedure class with the embedded test database
        ((GraphDatabaseAPI) db).getDependencyResolver()
                .resolveDependency(Procedures.class).registerProcedure(GraphStatistics.class);
    }

    @Test public void testDegree() throws Exception {
        // given Alice knowing Bob and Charlie and Dan knowing no-one
        db.execute("CREATE (alice:User)-[:KNOWS]->(bob:User),(alice)-[:KNOWS]->(charlie:User),(dan:User)").close();

        // when retrieving the degree of the User label
        Result res = db.execute("CALL stats.degree('User')");

        // then we expect one result-row with min-degree 0 and max-degree 2
        Map<String,Object> row = res.next();
        assertEquals("User", row.get("label"));
        // Dan has no friends
        assertEquals(0L, row.get("min"));
        // Alice knows 2 people
        assertEquals(2L, row.get("max"));
        // We have 4 nodes in our graph
        assertEquals(4L, row.get("count"));
        // only one result record was produced
        assertFalse(res.hasNext());
    }
}
Of course you can use procedures to create procedures, e.g. implemented in other languages that are supported natively on the JVM, like JavaScript via Nashorn, or Clojure, Groovy, Scala, Frege (Haskell), (J)Ruby or (J/P)ython.
I wrote one for creating and running procedures implemented in JavaScript.

There are many other cool things that you can do with procedures, see the resources below.

If you have ideas for procedures or wrote some of your own, please let us know.

Join our public Slack channel and visit #neo4j-procedures.



Using XRebel 2 with Neo4j

Posted by Michael Hunger on May 5, 2015 in neo4j

At Spring.IO in Barcelona I met my pal Oleg from ZeroTurnaround and we looked at how the new XRebel 2
integrates with Neo4j, especially with the remote access using the transactional Cypher http-endpoint.

As you probably know, Neo4j currently offers a remoting API based on HTTP requests (a new binary protocol is in development).

Our JDBC driver utilizes that http-based protocol to connect to the database and execute parameterized statements while adhering to the JDBC APIs.

XRebel is a lightweight Java Application Profiler which is loaded as java-agent and instruments your application.
It traces runtime for web requests and records your backend-application CPU usage, database- (JDBC) and http-requests to other services.
For web-applications it integrates automatically with the http-processing and injects profiling information into the response.

Movies Webapp

For this quick demo, we use the example Movies application which is available for many programming languages from our developer resources.
The application is just a plain Java webapp that serves three JSON endpoints to a simple Javascript frontend page.
The backend connects to Neo4j via JDBC to retrieve the requested information via our Cypher query language.

To prepare for running our app, just download, unzip and start Neo4j, open it on http://localhost:7474/ and run the :play movies statement in the Neo4j browser.
Then we can get and build the application and run it.
To test that it works, open the app in your browser at http://localhost:8080

git clone http://github.com/neo4j-contrib/developer-resources
cd developer-resources/language-guides/java/jdbc

mvn compile exec:java -DmainClass="org.neo4j.example.movies.Movies"

Setup with XRebel

To use XRebel we just download it, get an eval license and attach the jar as a java-agent to our application.

MAVEN_OPTS="-javaagent:$HOME/Downloads/xrebel/xrebel.jar" mvn compile exec:java -DmainClass="org.neo4j.example.movies.Movies"

If we check our example application page again, we see a small green XRebel icon in the left corner.
It provides access to the XRebel UI which has tabs for application performance, database queries, exceptions and more.

For our initial query for the “Matrix” movie, it shows both the request time for the web-application, as well as the database calls to Neo4j.
Interestingly both the JDBC level as well as the underlying http calls to Neo4j are displayed.

If we uglify our app so that our queries are executed inefficiently, simulating an n+1 select, that shows up clearly in XRebel as massive database interaction.

Runtime Exceptions due to a programming error are also made immediately accessible from the XRebel UI.

For non-visual REST-services you can access the same profiling information via a special endpoint that is added to your application, in our case: http://localhost:8080/xrebel

As you can see, XRebel can give you quick insights into the performance profile of your Neo4j-backed application and highlights which queries, pages or secondary requests
need further optimization.

Ping Oleg or me if you have more questions.

If you’re in London this week and want to have a relaxing election day,
make sure to grab a seat for GraphConnect on May 7, the Neo4j conference.
Ping me via email (michael at neo4j.org) for a steep discount as an avid reader of this blog.


How To: Neo4j Data Import – Minimal Example

Posted by Michael Hunger on Apr 18, 2015 in import, neo4j

You want to import data into Neo4j, but there are so many resources with so much information that it gets confusing.
Here is the minimal thing you need to know.

Imagine the data coming from the export of a relational or legacy system, just plain CSV files without headers (this time).


Graph Model

Our graph model is very simple:

import data model.jpg
(p1:Person {userId:10, name:"Anne"})-[:KNOWS]->(p2:Person {userId:123,name:"John"})

Import with Neo4j Server & Cypher

  1. Download, install and start Neo4j Server.

  2. Open http://localhost:7474

  3. Run the following statements one by one:

I used http-urls here to run this as an interactive, live Graph Gist.

LOAD CSV FROM "https://gist.githubusercontent.com/jexp/d8f251a948f5df83473a/raw/people.csv" AS row
CREATE (:Person {userId: toInt(row[0]), name:row[1]});
LOAD CSV FROM "https://gist.githubusercontent.com/jexp/d8f251a948f5df83473a/raw/friendships.csv" AS row
MATCH (p1:Person {userId: toInt(row[0])}), (p2:Person {userId: toInt(row[1])})
CREATE (p1)-[:KNOWS]->(p2);
You can also use file-urls.
Best with absolute paths like file:/path/to/data.csv, on Windows use: file:c:/path/to/data.csv
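For example, a hypothetical local file with the same structure could be loaded like this:

LOAD CSV FROM "file:/path/to/people.csv" AS row
CREATE (:Person {userId: toInt(row[0]), name: row[1]});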

If you want to find your people not only by id but also by name quickly, also run:

CREATE INDEX ON :Person(name);

For instance all second degree friends of “Anne” and on how many ways they can be reached.

MATCH (:Person {name:"Anne"})-[:KNOWS*2..2]-(p2)
RETURN p2.name, count(*) as freq

Bulk Data Import

For tens of millions up to billions of rows.

Shut down the server first!

Create two additional header files matching the two CSV files:

people_header.csv:

userId:ID,name

friendships_header.csv:

:START_ID,:END_ID

Execute from the terminal:

path/to/neo/bin/neo4j-import --into path/to/neo/data/graph.db  \
--nodes:Person people_header.csv,people.csv --relationships:KNOWS friendships_header.csv,friendships.csv

After starting your database again, create the index as before:

CREATE INDEX ON :Person(name);


On Neo4j Indexes, Match & Merge

Posted by Michael Hunger on Apr 11, 2015 in cypher, neo4j

We at Neo4j do our fair share to cause confusion among our users. I’m talking about indexes, my friends.
My trusted colleagues Nigel Small – Index Confusion and Stefan Armbruster – Indexing an Overview already did a great job explaining the indexing situation in Neo4j,
I want to add a few more aspects here.

Since the release of Neo4j 2.0 and the introduction of schema indexes, I have had to answer an increasing number of questions arising from confusion between the two types of index now available: schema indexes and legacy indexes.
For clarification, these are two completely different concepts and are not interchangable or compatible in any way.
It is important, therefore, to make sure you know which you are using.

— Nigel Small

Why do we need indexes in a graph database at all, aren’t we all about graph navigation?
To quickly find starting points for your graph traversals or pattern matches.

Schema Indexes

Neo4j 2.0 introduced the optional schema which was built around the concept of node labels.
Labels can be used for path matching and – along with properties – are used as the basis for schema indexes and constraints.
Schema indexes can automatically speed up queries, unlike legacy indexes which you have to use explicitly.

Note: Only schema indexes are aware of labels; legacy indexes are completely and utterly unaware of labels.

Further Note: Schema indexes are also only available for nodes whereas legacy indexes allowed relationships to be indexed as well.
The use cases for relationship indexing were few and could generally be worked around by introducing extra nodes.

— Nigel Small

Today Neo4j uses exact, case sensitive automatic schema indexes and constraints based on a single label and single property.
Labels and indexes are part of Neo4j’s “optional” schema concept.

Schema Indexes and MATCH

You create a schema index with CREATE INDEX ON :Label(property) e.g. CREATE INDEX ON :Person(name).

You list available schema indexes (and constraints) and their status (POPULATING,ONLINE,FAILED) with :schema in the browser or schema in the shell.
Always make sure that the indexes and constraints you want to use in your operations are ONLINE otherwise they won’t be used and your queries will be slow.

When you create a new schema index, it will also asynchronously index all existing nodes with that label and property combination.
After the index is available it will show as “ONLINE”. Later changes to nodes (label addition and removal and property updates) are automatically and transactionally reflected in the index as well.

You can await the index creation with schema await.

Fulltext, spatial and composite schema indexes are not available as of Neo4j 2.2.

You need to have an index or constraint to efficiently find starting nodes via MATCH, otherwise Neo4j has to run a full label scan with property comparisons to find your node which can be very expensive.

Schema indexes will be used for both inline syntax MATCH (p:Person {name:"Mark"}) as well as WHERE conditions MATCH (p:Person) WHERE p.name="Mark" and also IN predicates like MATCH (p:Person) WHERE p.name IN ["Mark","Max"].

Neo4j uses schema indexes automatically, yet you can force certain index usage with hints like USING INDEX p:Person(name) after a MATCH clause.

You can check that Neo4j actually uses an index, by prefixing your query with EXPLAIN and looking at the query plan visualization.
It should show a NodeIndexSeek instead of a NodeByLabelScan + Filter.
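For instance, assuming the index on :Person(name) from above:

EXPLAIN MATCH (p:Person) WHERE p.name = "Mark"
RETURN p;

With the index ONLINE, the plan begins with a NodeIndexSeek; without it, you would see a NodeByLabelScan followed by a Filter.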

Remember that only a single property and only the label that you defined an index for are considered for lookups!

Constraints and MERGE

A MERGE operation will also use the index for faster lookups, but an index alone doesn’t guarantee uniqueness of nodes.

That’s where unique constraints come into play, again for a single label and single property.
You create them with this unwieldy syntax CREATE CONSTRAINT ON (n:Label) ASSERT n.property IS UNIQUE, e.g. CREATE CONSTRAINT ON (b:Book) ASSERT b.isbn IS UNIQUE

Constraints will, on creation, check all nodes in the database with that label and property combination for uniqueness, and will fail and list the duplicates if there are any.
Constraint creation is blocking.
It will only return after the constraint has been either successfully created (“ONLINE”) or been aborted (“FAILED”).

Each constraint creates an accompanying index, which is noted in the listing; the constraint creation will fail if the same index already exists.

Unique constraints will ensure that no other operation will create a node with a duplicate label+property-value combination by failing the transaction with an exception.
That goes for all operations like CREATE node, SET property-value, ADD label, and also for calls via the Java API.

MERGE uses constraints for an efficient lookup and uniqueness check as well as acquiring a focused lock which guarantees uniqueness even in concurrent operations and across a cluster.

If you need to set other properties as part of your node-creation, use the ON CREATE SET option:
MERGE (b:Book {isbn:{isbn}}) ON CREATE SET b.title = {title}, b.year = {year}

Other types of constraints, e.g. property (type) or composite constraints are not available as of Neo4j 2.2

Composite Constraints (and Indexes)

If you really need composite-key constraints or index lookups, consider concatenating the values into an artificial id-property or using an array of those composite values as “id”. For example:

CREATE CONSTRAINT ON (a:Address) ASSERT a.composite_id IS UNIQUE;

MERGE (a:Address {composite_id: [{zip},{street},{number}]}) ON CREATE SET a.zip = {zip}, a.street={street}, a.number = {number};
// or
MERGE (a:Address {composite_id: {zip}+"_"+{street}+"_"+{number}}) ON CREATE SET a.zip = {zip}, a.street={street}, a.number = {number};

Manual (Legacy,Deprecated) Indexes

Prior to the release of Neo4j 2.0, legacy indexes were just called indexes. These were powered by Lucene outside the graph and allowed nodes and relationships to be indexed under a key:value pair. From the perspective of the REST interface, most things called “index” will still refer to these legacy indexes.

Note: Legacy indexes were generally used as pointers to start nodes for a query; they provided no automatic ability to speed up queries.

— Nigel Small

Historically there were manual indexes that you will come across in the documentation, old blog posts or examples.

If you don’t need a fulltext, spatial or relationship index or you don’t have to deal with a “legacy” Neo4j application, you can ignore them.
Stop reading here.

You had to add nodes and relationships (with key and value) to a named index yourself; that’s why they’re called “manual indexes”.

Those indexes were exact or fulltext Lucene indexes for nodes or relationships or spatial indexes for nodes.

The Lucene indexes also optionally exposed the lucene query syntax and had configurable cases and analyzers.

You could use manual indexes via the Java API or the START clause in Cypher, e.g.
START post=node:posts("title:Graphs") MATCH (post)<-[:WROTE]-(author) RETURN post, author

You manage them with the index command in the shell; you list them with index --indexes.

There is also a tab in the old Webadmin UI that lists them.

Legacy indexes also had options for unique node- and relationship-creation, which is now superseded by MERGE with CONSTRAINTs.
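As a sketch, the modern replacement for a legacy “get-or-create” looks like this (label and property are illustrative):

CREATE CONSTRAINT ON (u:User) ASSERT u.email IS UNIQUE;

MERGE (u:User {email: {email}})
ON CREATE SET u.created = timestamp();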

Deprecated Auto-Indexes

Because people didn’t like adding nodes and relationships manually, but we didn’t have labels back then, there was a way of having “automatic” indexes.

You could configure exactly one automatic index for all nodes (node_auto_index) and one for all relationships (relationship_auto_index) by listing the properties that were to be indexed.
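In neo4j.properties, such a configuration looked roughly like this (the keys to index are illustrative):

node_auto_indexing=true
node_keys_indexable=name,email
relationship_auto_indexing=true
relationship_keys_indexable=since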

You could use them again with the Java API and the START clause but this time with the fixed _auto_index name (see above).

You still find the configuration options in the neo4j.properties config file, and the APIs in both Java and the REST endpoints.
All of those are safe to ignore, except if you know what you’re doing and you want to try to use an automatic spatial or fulltext index.
Be aware that that is a tricky business.

So which should I use?

If you are using Neo4j 2.0 or above and do not have to support legacy code from a pre-2.0 era, use only schema indexes and avoid legacy indexes.
Conversely, if you are stuck with an earlier version of Neo4j and are unable to upgrade, you only have one type of index available to you anyway.

If you need full text indexing, regardless of Neo4j version, you will need to use legacy indexes.

The more complicated scenarios are those that involve a period of transition from one type of index to another.
In these cases, make sure you are fully aware of the differences and try, wherever possible, to use either schema or legacy indexes but not both.
Mixing the two will often lead to more confusion.

— Nigel Small

And if in doubt, ask a question on the Neo4j mailing list or on StackOverflow.

Copyright © 2007-2017 Better Software Development All rights reserved.