About Michael Hunger

Posts by Michael Hunger:

 

LOAD CSV with SUCCESS

on Oct 18, 2014 in cypher, import, neo4j

I have to admit that using our LOAD CSV facility is trickier than you and I would expect.
Several people ran into issues that they could not solve on their own.

My first blog post on LOAD CSV is still valid in its own right and contains important aspects that I won’t repeat here,
both in terms of data quality checking (broken CSV files, misspelt header names or incorrect data types) and in terms of transaction size, where PERIODIC COMMIT comes to the rescue.

To address the most frequent issues and questions, I decided to write this follow up post.

In general you might have a better experience using Neo4j Enterprise, as it contains some components which are more memory-efficient.

If you want to import much more than 10-15 million lines of data, you might consider using our non-transactional batch-insertion facilities:

Stay tuned for some new announcements from Neo Technology about a super-fast batch-insertion mechanism.

Clean and Check your CSV-Files

We ran into many issues where CSV files were simply broken. Please make sure that your files are not, otherwise you will spend hours hunting for bugs in the wrong place.

The CSV reader used by Cypher (OpenCSV) handles quotes and escaping correctly; that means if you have quotes in places where they don’t belong, please escape or remove them.
Otherwise you might end up with a million lines of CSV concatenated into a single string value, just because you had a stray quoted string in one place.

Other bad things:

  • Having binary zeros (\u0000) in your file will break it, remove them (e.g. tr < file-with-nulls.csv -d '\000' > file-without-nulls.csv)

  • the UTF file preamble (byte order mark, BOM) will trip it up, remove it

  • Escaped quotes instead of normal quotes in your cells break your file, e.g. "A title\", "An Author", unescape them

  • Quotes in the middle of the text will trip up the file structure, e.g. "I love :") this smiley", escape those

  • Make sure to have no unquoted text fields containing newlines

  • Windows newlines (CRLF) can sometimes trip it up when imported under a non-Windows OS, make sure to clean them up first

  • if you use non-ASCII characters (umlauts, accents etc.) make sure to use the appropriate locale or provide the system property -Dfile.encoding=UTF8

Some tools that can help you check and fix your CSV (a quick Cypher sanity check follows this list):

  • CSV Kit

  • CSV Lint

  • hexdump, and the hex-mode of editors like vi, emacs, UltraEdit and Notepad++

  • the tips on checking your CSV files from my last blog post
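
One quick sanity check in the same spirit is to let Cypher show you what it actually parses before you create anything. This is only a sketch; the file URL is a placeholder for your own data:

// do the first rows look like clean, separate fields?
LOAD CSV WITH HEADERS FROM "file:/path/to/your.csv" AS line
WITH line LIMIT 5
RETURN line;

// does the total line count match your expectation?
LOAD CSV WITH HEADERS FROM "file:/path/to/your.csv" AS line
RETURN count(*);

If the first query returns one huge concatenated value instead of a few tidy rows, you most likely hit one of the quoting problems listed above.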

Data Conversion

When you convert data from the CSV to be imported via toInt, toFloat, split or otherwise, make sure to do it consistently in all places.
One tip that can help is to use WITH to declare the converted values as identifiers once, right after the conversion:

LOAD CSV ... AS data
WITH data, toInt(data.id) as id, extract(p IN split(data.parts,";") | toInt(p)) as partIds
CREATE (n:Node {id:id})
FOREACH (partId in partIds | CREATE (:Part {id:partId})-[:PART_OF]->(n) )

Partially addressed Issue: Eager Loading for Change Isolation

The biggest issue that people ran into, even when following the advice I gave earlier, was that for large imports of more than one million rows, Cypher ran into an out-of-memory situation.

That was not related to commit sizes, so it happened even with PERIODIC COMMIT of small batches.

The issue is that within a single Cypher statement you have to isolate changes that affect matches further on, e.g. when you CREATE nodes with a label that are suddenly matched by a later MATCH or MERGE operation.
Generating one row of results would affect other, subsequent ones in unexpected ways.

One example query that illustrates the behavior you would get without that isolation:

MATCH (person:Person)
CREATE (clone:Person {name:"Clone of "+person.name});

If you don’t execute all the reads before all the updates, you’ll end up creating clone-of-clone armies.
If you profile that query you see that there is an “Eager” step in the query plan.
That is where the “pull in all data” happens.

+-------------+------+--------+----------------+------------+
|    Operator | Rows | DbHits |    Identifiers |      Other |
+-------------+------+--------+----------------+------------+
| EmptyResult |    ? |      ? |                |            |
| UpdateGraph |    ? |      ? |          clone | CreateNode |
|     *Eager* |    ? |      ? |                |            |
| NodeByLabel |    ? |      ? | person, person |    :Person |
+-------------+------+--------+----------------+------------+

How does this affect LOAD CSV?

Cypher deals with this as follows: as soon as it detects an update followed by a read (or the other way round), it executes the first operation for all rows before continuing with the second.
It does this by inserting an Eager operator (which you can spot in the query plan) that fetches all intermediate results from the previous step before continuing.

In normal queries, where you create at most a few (hundred) thousand nodes or relationships in one statement, that’s not an issue.
But when you deal with a CSV file with millions of input rows, it fills your memory with both the file contents and the created data (plus transaction state).
And as PERIODIC COMMIT is tied to the CSV lines read at the end of the statement, it is also effectively disabled.

This is not a problem if you have enough heap: I ran very complex LOAD CSV commands with several Eager operators in their execution plans over a lot of CSV data on machines with plenty of heap (e.g. 8, 16 or 32 GB), and there it was no problem to pull all intermediate state into memory.
But you might not want to afford such a luxury.

Don’t worry, here are some simple tips on how to avoid it:

Some Tips

  • Upgrade to 2.1.5+, Cypher has learned a number of constructs where it doesn’t have to put an Eager operator between reads and writes because they are actually independent

  • Profile your statement upfront (you can skip pulling input lines by adding WITH data LIMIT 0); if Eager shows up, simplify your statement

  • Write only simple LOAD CSV statements if you want to save memory, and make multiple passes across the same or multiple CSV files instead (see the sketch after this list)

    • only CREATE nodes or MERGE different types of nodes in one statement

    • don’t mix MERGE of nodes and MERGE of relationships
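
To make the multi-pass idea concrete, here is a minimal sketch of splitting one combined import into three simple statements. The file name and the name/company headers are made up for illustration; each pass only touches one kind of graph element:

// Pass 1: one type of node only
LOAD CSV WITH HEADERS FROM "file:/path/to/people_companies.csv" AS row
MERGE (:Person {name: row.name});

// Pass 2: the second type of node, in its own statement
LOAD CSV WITH HEADERS FROM "file:/path/to/people_companies.csv" AS row
MERGE (:Company {name: row.company});

// Pass 3: only now connect the nodes created above
LOAD CSV WITH HEADERS FROM "file:/path/to/people_companies.csv" AS row
MATCH (p:Person {name: row.name})
MATCH (c:Company {name: row.company})
MERGE (p)-[:WORKS_AT]->(c);

Each pass reads the CSV again, but re-reading the file is cheap compared to holding all intermediate results of a combined statement in memory.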

This “Eager” step also shows up in the following LOAD CSV statement in versions before 2.1.5:

PROFILE LOAD CSV WITH HEADERS FROM "..." AS data
WITH data LIMIT 0 // limit 0 for profiling only
MATCH (p:Person {name:data.name})
MATCH (c:Company {name:data.company})
CREATE (p)-[:WORKS_AT]->(c)
Neo4j before 2.1.5
+----------------+------+--------+--------------+----------------------------------------+
|       Operator | Rows | DbHits |  Identifiers |                                  Other |
+----------------+------+--------+--------------+----------------------------------------+
|    EmptyResult |    0 |      0 |              |                                        |
|    UpdateGraph |    0 |      0 |   UNNAMED161 |                     CreateRelationship |
|       !! Eager |    0 |      0 |              |                     ! Watch this !     |
| SchemaIndex(0) |    0 |      0 |         c, c | Property(data,company); :Company(name) |
| SchemaIndex(1) |    0 |      0 |         p, p |  Property(data,name(0)); :Person(name) |
|          Slice |    0 |      0 |              |                           {  AUTOINT0} |
|        LoadCSV |    1 |      0 |         data |                                        |
+----------------+------+--------+--------------+----------------------------------------+

Fortunately, Cypher was improved in 2.1.5 to recognize that some patterns are unrelated, so it no longer adds the Eager step for them by default.
Here is the profiler output of the same query in 2.1.5; you can see that the Eager operation is missing.

Neo4j 2.1.5+
+----------------+------+--------+--------------+----------------------------------------+
|       Operator | Rows | DbHits |  Identifiers |                                  Other |
+----------------+------+--------+--------------+----------------------------------------+
|    EmptyResult |    0 |      0 |              |                                        |
|    UpdateGraph |    0 |      0 |   UNNAMED179 |                     CreateRelationship |
| SchemaIndex(0) |    0 |      0 |         c, c | Property(data,company); :Company(name) |
| SchemaIndex(1) |    0 |      0 |         p, p |  Property(data,name(0)); :Person(name) |
|          Slice |    0 |      0 |              |                           {  AUTOINT0} |
|        LoadCSV |    1 |      0 |         data |                                        |
+----------------+------+--------+--------------+----------------------------------------+

There are some statements that are not yet covered, e.g. property updates like this:

LOAD CSV ... AS data
MATCH (n:Node {id:data.id})
SET n.value = data.value

Fixed Issue: Read your own Changes (Fixed in 2.1.5+)

Another issue that could slow down an import was a read-your-own-writes problem in Neo4j (recently fixed in 2.1.5) when using a statement like the one below.
It happened especially when you had schema indexes in place to speed up your node-by-label-and-value lookups.

CREATE INDEX ON :Person(name);
CREATE INDEX ON :Company(name);
...
MATCH (p:Person {name:"John"}),(c:Company {name:"ACME"})
CREATE (p)-[:WORKS_AT]->(c);

The reason for that issue was that the transaction-state check overlaid on index lookups (i.e. checking for potential node changes that affect index results, like added or removed labels and properties) also included nodes where only other aspects had changed (e.g. relationships added).
That check also did not take labels into account.
So the more relationships you created, the more nodes it had to scan.
That’s why PERIODIC COMMIT with a small transaction size (100 or 1000) helped, as sketched below.
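
Here is a minimal sketch of what that looks like as a LOAD CSV statement with a deliberately small batch size; the file URL and the name/company headers are placeholders for illustration:

USING PERIODIC COMMIT 100
LOAD CSV WITH HEADERS FROM "file:/path/to/works_at.csv" AS row
MATCH (p:Person {name: row.name}), (c:Company {name: row.company})
CREATE (p)-[:WORKS_AT]->(c);

The small batch size keeps each transaction’s change set small, so the pre-2.1.5 check has fewer changed nodes to scan on every index lookup.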

Avoid Windows for Import

Due to a variety of reasons, disk and memory-mapping operations on Windows are much slower than on Linux and Mac.
This might not be so apparent in day-to-day operations with Neo4j but for imports where every millisecond counts, it quickly adds up and becomes a bottleneck.
So even if you just grab a live-boot CD, an AWS or DigitalOcean instance (better with SSD) or your friend’s Linux machine, you’ll be happier.

Use the Shell, Luke

The Neo4j-Shell is most helpful when importing data, as you can point it to different test-database directories (-path test.db), kill it with ctrl-c and run multiple instances in parallel (on different databases).
You can also supply a config file where you adapted the memory mapping sizes to fit your projected store sizes (-config conf/neo4j.properties).
And you can load commands from a file (-file import.cyp), no need for tedious copy & paste.

You find the neo4j-shell (or Neo4jShell.bat) script in your path/to/neo4j/bin and you can run it from anywhere.
If you have a server running and don’t provide the -path parameter, it will connect to the running server (if you didn’t disable the remote shell).
For Windows users that installed the database via the graphical installer, my colleague Mark explained the steps to access the Neo4j-Shell.

There is only one caveat: if you run neo4j-shell without the server, you have to provide it with more RAM for the import.

You can do that by setting an environment variable: export JAVA_OPTS="-Xmx4G -Xms4G -Xmn1G". For machines with more RAM you can increase that to 8 or 16 GB, but to no more than a quarter of your RAM.

For really large imports, you should use the remainder of your RAM for memory mapping, projecting the expected node, relationship and property-counts.
In the file you provide to the shell via -config conf/neo4j.properties:

# e.g. for 25M nodes, 250M relationships, total 10.4G, with 4G heap and 2G for the OS out of 16GB total
# 15 bytes per node
neostore.nodestore.db.mapped_memory=400M
# 35 bytes per rel
neostore.relationshipstore.db.mapped_memory=7G
# 42 bytes per property
neostore.propertystore.db.mapped_memory=2G
# long strings, chopped up into 60 char segments
neostore.propertystore.db.strings.mapped_memory=1G
# arrays if needed
#neostore.propertystore.db.arrays.mapped_memory=100M
export JAVA_OPTS="-Xmx4G -Xms4G -Xmn1G"
path/to/neo4j/bin/neo4j-shell -path import-test.db -config path/to/neo4j/conf/neo4j.properties -file import-test.cyp

Need Help? We’re there

If you have any questions regarding importing data into Neo4j, don’t worry, we can help you quickly:

 

Flexible Neo4j Batch Import with Groovy

on Oct 9, 2014 in import, neo4j

You might have data as CSV files to create nodes and relationships from in your Neo4j Graph Database.
It might be a lot of data, like many tens of million lines.
Too much for LOAD CSV to handle transactionally.

Usually you can just fire up my batch-importer and prepare node and relationship files that adhere to its input [...]

 

LOAD CSV into Neo4j quickly and successfully

on Jun 25, 2014 in cypher, import

Note

You can also read an interactive and live version of this blog post as a Neo4j GraphGist.

Since version 2.1 Neo4j provides out-of-the-box support for CSV ingestion. The LOAD CSV command that was added to the Cypher Query Language is a versatile and powerful ETL tool.
It allows you to ingest CSV data from any URL [...]

 

Rendering a Neo4j Database in UbiGraph

on Jun 23, 2014 in cypher, server

I had never heard of UbiGraph before, but this tweet by @a61dr41n made me curious.

#ubigraph is an excellent #visualisation package for #neo4j and other #GraphDB can't believe I haven't heard of it sooner…
— a61dr41n (@a61dr41n) June 20, 2014

So I checked it out. UbiGraph is a graph rendering server that is controlled remotely and also interactively with [...]

 

Presentation: “Using AsciiArt to Analyse your SourceCode with Neo4j and OSS Tools” at GeekOut.ee 2014

on Jun 15, 2014 in conference, neo4j, programming languages

During the awesome GeekOut conference organized by my friends at ZeroTurnaround I was asked to stand in for Tim Fox who couldn’t come.

So instead of using an existing presentation, I decided to finally write one up overnight, covering one aspect of graph databases that is close to my heart:
Software Analytics with Graphs
When I [...]

 

Styling Neo4j Server Visualisation

on Jun 3, 2014 in neo4j, server


To give you a head start when using Neo4j-Browser I wanted to share these quick tips for styling and querying.

 

Using LOAD CSV to import Git History into Neo4j

on Jun 1, 2014 in cypher, neo4j

In this blog post, I want to show the power of LOAD CSV, which is much more than just a simple data ingestion clause for Neo4j’s Cypher.
I want to demonstrate how easy it is to use by importing a project’s git commit history into Neo4j. For demonstration purposes, I use Neo4j’s repository on GitHub, which [...]

 

Importing Forests into Neo4j

on Apr 10, 2014 in cypher, neo4j

Sometimes you don’t see the forest for the trees. But if you do, you probably use a graph database.

Trees are one of the simple graph datastructures, directed acyclic graphs (DAGs).

For our example we use a time-tree that we want to import into the database.

Data Volume

A quick Soulver script (thanks Mark) later, we know how [...]

 

Sampling A Neo4j Database

on Mar 25, 2014 in cypher, neo4j

After reading the interesting blog post of my colleague Rik van Bruggen on “Media, Politics and Graphs”, I thought it would be really cool to render it as a GraphGist, especially as he already shared all the queries as a GitHub Gist.

Unfortunately the dataset was a bit large for a sensible GraphGist representation, so I [...]

 

Quickly create a 100k Neo4j graph data model with Cypher only

on Mar 21, 2014 in cypher, neo4j

We want to run some test queries on an existing graph model but have no sample data at hand and also no input files (CSV, GraphML) that would provide it.

Why not quickly create it on our own just using Cypher? First I thought about using Cypher to generate CSV files and loading them back, but it [...]
