Distributed Web Using Interplanetary File System (IPFS)

Michael Whittle
October 17, 2020

Interplanetary File System (IPFS) is a really exciting technology. I learnt about it a couple of years ago when I was doing my Ethereum qualifications with B9Lab Academy.

There are a few articles about this already, but they seem to focus mainly on the theoretical side. If you are like me and prefer to learn by getting stuck in and getting your hands dirty, this article is probably for you.

So, in summary, what does IPFS solve?

  • An immutable store of global content: permanent, tamper-proof storage.
  • De-duplication of data: content is addressed by its hash, and each unique piece of content is stored only once.
  • Decentralization of network traffic: content is retrieved from multiple peers rather than from a single server (think "torrents").
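The de-duplication point is easy to demonstrate in miniature with an ordinary hash tool. Note this is only an illustration of content addressing, not how IPFS actually computes its identifiers (IPFS chunks files and wraps its digests in multihash/CID encoding, so a raw SHA-256 digest is not a valid IPFS hash):

```shell
# Two files with identical bytes produce identical digests, so a
# content-addressed store only ever needs to keep one copy.
printf 'IPFS Example\n' > copy1.txt
printf 'IPFS Example\n' > copy2.txt

hash1=$(sha256sum copy1.txt | cut -d' ' -f1)
hash2=$(sha256sum copy2.txt | cut -d' ' -f1)

[ "$hash1" = "$hash2" ] && echo "identical content, identical address"
```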

Let’s look at a use case… you create your smart contract on the blockchain, which is supposed to be permanent and immutable, but the interface to your DAPP sits on a centralised server somewhere. Kind of defeats the purpose? There are many awesome IPFS solutions and ideas here — notice this link is itself served over IPFS :)

So let’s get cracking…

I’m going to fire up an Amazon EC2 Ubuntu instance for this. Nothing crazy; just the minimum free-tier “t2.micro” will do.

  • Make sure you assign a public IP either by setting “Auto-assign Public IP” to “Enable” on Step 3 or by creating an Elastic IP and assigning it to your EC2 instance.
  • I want to include a bootstrap script that runs straight after the instance has been provisioned to take care of the usual updates. You can include this script under “Advanced Details”, “User data” on Step 3.

#!/bin/bash
apt-get update -y
apt-get upgrade -y

  • On Step 6 you will be asked to configure a Security Group. Create one called “acl-ipfs-in” with the same description, “acl-ipfs-in”. Leave the default SSH rule in place so you can still manage your instance after it has been provisioned, but add three additional rules:
  • TCP 8080 for the IPFS Gateway
  • TCP/UDP 4001 for IPFS Swarm

Just for interest’s sake, I got the list of ports that need to be opened from the node’s own config file…

$ cat .ipfs/config
"Addresses": {
   "API": "/ip4/127.0.0.1/tcp/5001",
   "Announce": [],
   "Gateway": "/ip4/127.0.0.1/tcp/8080",
   "NoAnnounce": [],
   "Swarm": [
     "/ip4/0.0.0.0/tcp/4001",
     "/ip6/::/tcp/4001",
     "/ip4/0.0.0.0/udp/4001/quic",
     "/ip6/::/udp/4001/quic"
   ]
 },

You should open TCP 8080 for the IPFS Gateway and TCP and UDP 4001 for the IPFS Swarm, but don’t open TCP 5001 for the WebUI, as that would create a security hole. The API is for local access only!
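One thing to note: a freshly initialised node binds the Gateway to 127.0.0.1, so even with TCP 8080 open in the Security Group the gateway is not reachable from outside. If you do want to serve content from your own gateway rather than a public one, the “Addresses” section of “.ipfs/config” would need to look something like this (keep the API on loopback!):

```json
"Addresses": {
  "API": "/ip4/127.0.0.1/tcp/5001",
  "Gateway": "/ip4/0.0.0.0/tcp/8080"
}
```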

  • As always make sure you create your instance with an SSH key that you own so you are able to access your instance after it has been provisioned.

Log into your instance with your SSH client…

% ssh -i SSH_PRIVATE_KEY.pem ubuntu@PUBLIC_IP       # for Ubuntu
% ssh -i SSH_PRIVATE_KEY.pem ec2-user@PUBLIC_IP     # for Red Hat

The bootstrap script should already have completed the “update” and “upgrade”, but just to be sure…

$ sudo apt-get update -y
$ sudo apt-get upgrade -y

So the first step is to install IPFS, which is very simple. You will want to locate the most recent version of go-ipfs here. I’m going to use “go-ipfs_v0.7.0-rc2_linux-amd64.tar.gz”.

$ curl https://dist.ipfs.io/go-ipfs/v0.7.0-rc2/go-ipfs_v0.7.0-rc2_linux-amd64.tar.gz | tar xvzf -
go-ipfs/ipfs
100 24.7M  100 24.7M    0     0  2268k      0  0:00:11  0:00:11 --:--:-- 3741k
go-ipfs/LICENSE
go-ipfs/LICENSE-APACHE
go-ipfs/LICENSE-MIT
go-ipfs/README.md

$ cd go-ipfs
~/go-ipfs$ sudo ./install.sh
Moved ./ipfs to /usr/local/bin

~/go-ipfs$ cd -
$ rm -Rf go-ipfs/*

So that’s the installation :)

Now to initialise it.

$ ipfs init
generating ED25519 keypair...done
peer identity: 12D3KooWSVWgaCwUoBAPopHod48kH3Drj4zqRZa7ojZ3WdMTv1wo
initializing IPFS node at /home/ubuntu/.ipfs
to get started, enter:

ipfs cat /ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc/readme

This will create a “.ipfs” directory.

$ du -hs .ipfs
312K .ipfs

$ tree .ipfs
.ipfs
├── blocks
│   ├── 25
│   │   └── CIQGA7TYHAL6QKGXDIUT647RE3QFNCUL4XJU2YO522YVTOLDCFD425I.data
│   ├── 6Y
│   │   └── CIQA4T3TD3BP3C2M3GXCGRCRTCCHV7XSGAZPZJOAOHLPOI6IQR3H6YQ.data
│   ├── 75
│   │   └── CIQBEM7N2AM5YRAMJY7WDI6TJ4MGYIWVBA7POWSBPYKENY5IKK2I75Y.data
│   ├── AI
│   │   └── CIQJ46AI7Z3GKSOAK7I4364OQFDZCLUQNLYFY6MG7D7FC6GYOCNCAIQ.data
│   ├── BE
│   │   └── CIQCXBHBZAHEHBHU6P7PEA72E7UZQRJALHH7OH2FCWSWMTU7DMWVBEA.data
│   ├── HB
│   │   └── CIQMDQRK7B5DSZKBYOX4353TGN5J3JXS5VS6YNSAEJBOXBG26R76HBY.data
│   ├── I2
│   │   └── CIQBZNLCBI3U2I5F7O636DRBO552SCMSK2X2WYVCQ6BMYJN4MJTRI2Q.data
│   ├── IL
│   │   └── CIQJFGRQHQ45VCQLM7AJNF2GF5UHUAGGHC6LLAH6VYDEKLQMD4QLILY.data
│   ├── IY
│   │   └── CIQB4655YD5GLBB7WWEUAHCO6QONU5ICBONAA5JEPBIOEIVZ5RXTIYY.data
│   ├── JN
│   │   └── CIQPHMHGQLLZXC32FQQW2YVM4KGFORVFJAQYY55VK3WJGLZ2MS4RJNQ.data
│   ├── JU
│   │   └── CIQKLQGD2NKPRWTXRCQI5BVPRVK2HMP6DIHVVWDEFQDIR2ZRIGSGJUQ.data
│   ├── KE
│   │   └── CIQD44K6LTXM6PHWK2RHB3G2VCYFPMVBTALE572GSMETJGBJTELFKEI.data
│   ├── LB
│   │   └── CIQPYNLTGPYZ5ICGWYR3JK3G4HYY6NUZNRU6DI53MXEHNSOQXXNCLBA.data
│   ├── MJ
│   │   └── CIQHQFRJK4MU2CVNFR3QG6KZB3FZG6OG7EBI4SUNB5K4S4T5UVECMJA.data
│   ├── N2
│   │   └── CIQDWKPBHXLJ3XVELRJZA2SYY7OGCSX6FRSIZS2VQQPVKOA2Z4VXN2I.data
│   ├── N6
│   │   └── CIQGFYPT5OBMRC7ZMUFC2R3ZQPKOGBMHJEDDFEVS5ALYBKIZCXPTN6Y.data
│   ├── OO
│   │   └── CIQBT4N7PS5IZ5IG2ZOUGKFK27IE33WKGJNDW2TY3LSBNQ34R6OVOOQ.data
│   ├── QD
│   │   └── CIQL4QZR6XGWMPEV5Q2FCTDFD7MF3G5OOC5CMEDUHNA5VXYZVDLFQDA.data
│   ├── QV
│   │   └── CIQOHMGEIKMPYHAUTL57JSEZN64SIJ5OIHSGJG4TJSSJLGI3PBJLQVI.data
│   ├── R3
│   │   └── CIQBED3K6YA5I3QQWLJOCHWXDRK5EXZQILBCKAPEDUJENZ5B5HJ5R3A.data
│   ├── RO
│   │   └── CIQDRD2UT66U4EATJW53PSVWMFFPGNAN42PVWMDLHJD6FA5EVNNZROI.data
│   ├── SHARDING
│   ├── TP
│   │   └── CIQCODPXR5G237BYM7E5JF4A624CLH2TQDLC4QI6HEZK7FUWZQESTPI.data
│   ├── U2
│   │   └── CIQHFTCY7XL57YWLVDQ6UAXUOND3ADYQYJKYXA6G7A5IMD7SMO22U2A.data
│   ├── UC
│   │   └── CIQFKVEG2CPWTPRG5KNRUAWMOABRSTYUFHFK3QF6KN3M67G5E3ILUCY.data
│   ├── V3
│   │   └── CIQAPZYJAKUKALYI4YTB5PUMEN5BZYZHUQZWGFL4Q3HZUV26SYX2V3Q.data
│   ├── VN
│   │   └── CIQPEOA2TS3RMLOBOF55ZOEZE3TNBQG3HCNFOYC3BATAIJBOIE5FVNY.data
│   ├── X3
│   │   └── CIQFTFEEHEDF6KLBT32BFAGLXEZL4UWFNWM4LFTLMXQBCERZ6CMLX3Y.data
│   ├── XV
│   │   └── CIQGAS6MQJCEC37C2IIH5ZFYJCSTT7TCKJP3F7SLGNVSDVZSMACCXVA.data
│   ├── _README
│   └── diskUsage.cache
├── config
├── datastore
│   ├── 000002.ldb
│   ├── 000003.log
│   ├── CURRENT
│   ├── CURRENT.bak
│   ├── LOCK
│   ├── LOG
│   └── MANIFEST-000004
├── datastore_spec
├── keystore
└── version

31 directories, 41 files

As the instructions recommend you are able to view the “readme” file stored in the local IPFS store.

$ ipfs cat /ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc/readme
Hello and Welcome to IPFS!

██╗██████╗ ███████╗███████╗
██║██╔══██╗██╔════╝██╔════╝
██║██████╔╝█████╗  ███████╗
██║██╔═══╝ ██╔══╝  ╚════██║
██║██║     ██║     ███████║
╚═╝╚═╝     ╚═╝     ╚══════╝

If you're seeing this, you have successfully installed
IPFS and are now interfacing with the ipfs merkledag!

-------------------------------------------------------
| Warning:                                              |
|   This is alpha software. Use at your own discretion! |
|   Much is missing or lacking polish. There are bugs.  |
|   Not yet secure. Read the security notes for more.   |
-------------------------------------------------------

Check out some of the other files in this directory:

 ./about
 ./help
 ./quick-start     <-- usage examples
 ./readme          <-- this file
 ./security-notes

If you want to see what else is stored using your IPFS instance you can run this.

$ ipfs ls /ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc
QmQy6xmJhrcC5QLboAcGFcAE1tC8CrwDVkrHdEYJkLscrQ 1681 about
QmYCvbfNbCwFR45HiNP45rwJgvatpiW38D961L5qAhUM5Y 189  contact
QmU5k7ter3RdjZXu3sHghsga1UQtrztnQxmTL22nPnsu3g 311  help
QmejvEPop4D7YUadeGqYWmZxHhLc4JBUCzJJHWMzdcMe2y 4    ping
QmQGiYLVAdSHJQKYFRTJZMG4BXBHqKperaZtyKGmCRLmsF 1681 quick-start
QmPZ9gcCEpqKTo6aq61g2nXGUhM4iCL3ewB6LDXZCtioEB 1091 readme
QmQ5vhrL7uv6tuoN9KeVBwd4PwfQkXdVVmDLUZuTNxqgvm 1162 security-notes

In order to be able to make requests to the IPFS daemon from a browser we’ll need to add some HTTP headers for CORS.

$ ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'
$ ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "GET", "POST"]'
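These two commands just edit “.ipfs/config”; afterwards the API section should contain something along these lines:

```json
"API": {
  "HTTPHeaders": {
    "Access-Control-Allow-Methods": ["PUT", "GET", "POST"],
    "Access-Control-Allow-Origin": ["*"]
  }
}
```

The config is read at startup, so restart the daemon after changing it for the headers to take effect.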

Let’s start the IPFS daemon…

$ ipfs daemon
Initializing daemon...
go-ipfs version: 0.7.0-rc2
Repo version: 10
System version: amd64/linux
Golang version: go1.14.4
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/127.0.0.1/udp/4001/quic
Swarm listening on /ip4/172.16.0.70/tcp/4001
Swarm listening on /ip4/172.16.0.70/udp/4001/quic
Swarm listening on /ip6/::1/tcp/4001
Swarm listening on /ip6/::1/udp/4001/quic
Swarm listening on /p2p-circuit
Swarm announcing /ip4/127.0.0.1/tcp/4001
Swarm announcing /ip4/127.0.0.1/udp/4001/quic
Swarm announcing /ip4/172.16.0.70/tcp/4001
Swarm announcing /ip4/172.16.0.70/udp/4001/quic
Swarm announcing /ip4/52.215.221.191/tcp/4001
Swarm announcing /ip6/::1/tcp/4001
Swarm announcing /ip6/::1/udp/4001/quic
API server listening on /ip4/127.0.0.1/tcp/5001
WebUI: http://127.0.0.1:5001/webui
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready

Please note that the WebUI is locked down to http://127.0.0.1:5001/webui. Do not open up TCP 5001 to the outside world, as it is a security risk. Opening TCP 5001 would not work anyway without some nasty SSH redirect tricks, and you really should not do it.

Now leave the IPFS daemon running in a dedicated terminal window and open up a new terminal session to the same instance.

I’m going to create a basic “index.html” file for the demonstration; please do the same.

<!DOCTYPE html>
<html>
<head>
 <title>IPFS Example</title>
</head>
<body>
 IPFS Example
</body>
</html>

Now we add our “index.html” to IPFS.

$ ipfs add index.html
added QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3 index.html
90 B / ? [------------

Notice that a hash was created for this file. This hash will be exactly the same for anyone who adds the exact same file to IPFS.

$ ipfs add index.html
added QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3 index.html
90 B / 90 B [==============================================================================================================================================] 100.00%

$ ipfs add index.html
added QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3 index.html
90 B / 90 B [==============================================================================================================================================] 100.00%

The hash is always the same!

So the file has been stored, but how do we retrieve it? Well, we have several options.

$ ipfs cat QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3
<html>
<head>
 <title>IPFS Example</title>
</head>
<body>
 IPFS Example
</body>
</html>

$ curl http://127.0.0.1:8080/ipfs/QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3
<html>
<head>
 <title>IPFS Example</title>
</head>
<body>
 IPFS Example
</body>
</html>

$ ipfs object get QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3
{"Links":[],"Data":"\u0008\u0002\u0012Z\u003chtml\u003e\n\u003chead\u003e\n  \u003ctitle\u003eIPFS Example\u003c/title\u003e\n\u003c/head\u003e\n\u003cbody\u003e\n  IPFS Example\n\u003c/body\u003e\n\u003c/html\u003e\n\u0018Z"}

# There is a really nice tool called 'jq' to print JSON in a more
# user-friendly way. You can install it like this:
# sudo apt-get install jq -y

$ ipfs object get QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3 | jq .
{
 "Links": [],
 "Data":
"\b\u0002\u0012Z<html>\n<head>\n  <title>IPFS Example</title>\n</head>\n<body>\n  IPFS Example\n</body>\n</html>\n\u0018Z"
}

Adding individual files is fine, but for multiple files you really should add some structure by wrapping them in a directory. This is done with the “-w” argument. It’s also worth adding the “-q” argument to suppress the file-transfer output.

$ ipfs add -wq index.html
QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3
QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU

And to retrieve it…

$ curl https://ipfs.io/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU/index.html
<html>
<head>
 <title>IPFS Example</title>
</head>
<body>
 IPFS Example
</body>
</html>

A couple of points to highlight here…

  • The first hash is the hash of the file and the second is the hash of the wrapping directory.
  • You will notice I’m not retrieving my file locally but from the public “ipfs.io” gateway. We will look into this in more detail further down.

$ ipfs cat QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU/index.html
<html>
<head>
 <title>IPFS Example</title>
</head>
<body>
 IPFS Example
</body>
</html>

$ curl http://127.0.0.1:8080/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU/index.html
<html>
<head>
 <title>IPFS Example</title>
</head>
<body>
 IPFS Example
</body>
</html>

$ ipfs object get QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU/index.html

If you are familiar with bash on Linux you can make your life a little easier by storing the hash in a variable.

$ DHASH=$(ipfs add -wq index.html | tail -n 1)

And then you can use it like this…

$ curl https://ipfs.io/ipfs/$DHASH/index.html
<html>
<head>
 <title>IPFS Example</title>
</head>
<body>
 IPFS Example
</body>
</html>
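Why “tail -n 1”? Because “ipfs add -wq” prints one hash per file followed by the wrapping directory hash on the last line, and it is that last line we want. A quick sketch of the mechanics, with a printf standing in for the real ipfs output:

```shell
# Stand-in for 'ipfs add -wq index.html': the file hash first, then
# the wrapping directory hash on the last line.
fake_add_wq() {
  printf '%s\n' \
    'QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3' \
    'QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU'
}

# 'tail -n 1' keeps only the final line: the directory hash.
DHASH=$(fake_add_wq | tail -n 1)
echo "$DHASH"
```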

Another useful tip is that you can also add files recursively to IPFS. I created a directory called “import” and added three identical files called “index1.html”, “index2.html” and “index3.html”.

~/import$ ipfs add -rq .
QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3
QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3
QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3
QmWmKpjswxD3EUvnr13huCDYkjgRcj4mZtKh7GvGoR89NT

Like before, the first three hashes are for the files (identical, because the file contents are the same) and the last hash is for the directory, so we can retrieve a file like this.

$ ipfs cat QmWmKpjswxD3EUvnr13huCDYkjgRcj4mZtKh7GvGoR89NT/index1.html
<html>
<head>
 <title>IPFS Example</title>
</head>
<body>
 IPFS Example
</body>
</html>

If you want to PUT an object into IPFS you can do it like this, and retrieval works the same as before.

$ echo '{"data":"IPFS Example"}' | ipfs object put
added QmSBeT1cjLkgoaaG1qMF7Hpq1RpzTpW1cM4Vii95qtiaPi

In order to interact with an Ethereum smart contract you need two things: the Ethereum address of the smart contract and the ABI, which is its interface definition. A common use case is to store the ABI (which is JSON) for the smart contract in IPFS, as above.
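As a sketch of that use case, the snippet below writes a minimal, entirely hypothetical single-function ABI to “abi.json” (real ABIs are generated by the Solidity compiler) and shows the add command you would then run:

```shell
# A minimal, hypothetical ABI for a contract exposing one 'get'
# function. Real ABIs come out of the Solidity compiler.
cat > abi.json <<'EOF'
[
  {
    "inputs": [],
    "name": "get",
    "outputs": [{ "internalType": "uint256", "name": "", "type": "uint256" }],
    "stateMutability": "view",
    "type": "function"
  }
]
EOF

# Store it in IPFS and keep the hash for your DAPP front end:
#   ABI_HASH=$(ipfs add -q abi.json)
```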

If you want to view what you have stored recursively you can run this.

$ ipfs pin ls | grep recursive
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn recursive
QmWmKpjswxD3EUvnr13huCDYkjgRcj4mZtKh7GvGoR89NT recursive
QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc recursive
QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU recursive
QmZHBMxeAJpYSFgQ3a4Ne3LXFCLmoegKeGmDxBTnmhsE3H recursive
QmacpuLxTkUyvYwr53Q6SFMruovyLK3Ku2BJB1Ugur4oc3 recursive

You can see all the hashes we have created in this tutorial.

You may be under the impression that everything we have done so far is local only, but you would be wrong. There was already a clue when we were able to retrieve our file from “ipfs.io”. But how?

$ ipfs swarm peers
/ip4/104.131.131.82/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ
/ip4/107.191.57.64/udp/4001/quic/p2p/QmfASb7hpz7oqNtiojbBzMo7rcbA4b3DAGTnovcKMrBtkq
/ip4/108.20.79.252/udp/53744/quic/p2p/12D3KooWLa8W4orcMn3BfyAmKQcqr7tPRFr1oURfGgCjAGyqANUL
/ip4/109.121.164.21/tcp/42707/p2p/QmYPs1a5AaYvgiStnaaFJdGmLrTPUTbFHZ74kVaKuUSwNR
/ip4/111.229.255.219/tcp/4001/p2p/QmS5eDWVDpRdPHh2vyPKrfGvdVrEhx4cEiezPmej7uAgBw
/ip4/116.202.229.43/udp/39493/quic/p2p/12D3KooWSJ6YxCUjMpjFn6qDYfkam9N55uFhsyguM9UouzVE9hmG
*** LONG LIST ***

We can inspect the first one for example…

$ ipfs id QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ
{
"ID": "QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
"PublicKey": "CAASpgQwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCh9cDnwNXlVq/A6EVm+MVldzrbVI3cIZypaIYToAlsLf0GmATISWhUW5yd8Z3RMcyECLd4Hffd+vIIpCqCFSPOA5VRZKYtyra9EN0m+FB1F1Z8oSjwCgVthja5VJ3bWcpydih3XJC9kdYlGtvf02v2ignDv+aeGxWH6PMaS1WvyAlee29mgxZfnA7wrRsi2Lc3Se4CqkZWbNX3qf9usQmf42s2Or1OEpMQim1HOjSed6yhXkmyD/5htCIus6Y06Egdcaf9zuqIogRPpc7d4d7jFOJ4gLxxPKV4gUaE6F4NIc/0DiPDQfE+4aBkUvKEZkmZhilz5R1pK1eM2bfeideGrWuuvPjfw0PbjtpDShWSlZGRfFK/FnQTWRSdDnCSvJGZKPHVsly0iw+Qp6BbDrKa3KmT+JPG+xN6U6XEcKijCbV0u0/YCHm959zCN+ryzpoXuRkwMt+ZyL9VGYdWHuJkoJcw+QKWEFcWJeDQ4eKn+QRppqSA7QjPm0w68FZ7/pq/RwB52Mx9fyLvyDWY+GyeBnjK954imamcR8jQV+fzuK9AGFyN1JmhwWfDWNerg69lgZRM4Li2vSz+S/gMjJ5/Yf6UgW33nhKuXoLFiPiUuG/VmdpZEvh1TeKiPy0VKYRaVXCnLY2FNzJbld08adnKMLgYbCAXDRCVW32iFoIscwIDAQAB",
"Addresses": [
 "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
 "/ip4/104.131.131.82/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
 "/ip4/127.0.0.1/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
 "/ip4/127.0.0.1/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"
],
"AgentVersion": "go-ipfs/0.7.0-rc1/901c58b",
"ProtocolVersion": "ipfs/0.1.0",
"Protocols": [
 "/ipfs/bitswap",
 "/ipfs/bitswap/1.0.0",
 "/ipfs/bitswap/1.1.0",
 "/ipfs/bitswap/1.2.0",
 "/ipfs/id/1.0.0",
 "/ipfs/id/push/1.0.0",
 "/ipfs/kad/1.0.0",
 "/ipfs/lan/kad/1.0.0",
 "/ipfs/ping/1.0.0",
 "/libp2p/autonat/1.0.0",
 "/libp2p/circuit/relay/0.1.0",
 "/p2p/id/delta/1.0.0",
 "/x/"
]
}

We can also inspect the statistics of our IPFS.

$ ipfs stats bitswap
bitswap status
provides buffer: 0 / 256
blocks received: 0
blocks sent: 0
data received: 0
data sent: 0
dup blocks received: 0
dup data received: 0
wantlist [0 keys]
partners [58]

$ ipfs stats bw
Bandwidth
TotalIn: 3.0 MB
TotalOut: 590 kB
RateIn: 1.1 kB/s
RateOut: 31 B/s

$ ipfs stats repo
NumObjects: 42
RepoSize: 287991
StorageMax: 10000000000
RepoPath: /home/ubuntu/.ipfs
Version: fs-repo@10

So we have confirmed we can access the link in a web browser: https://ipfs.io/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU/index.html

This is great and all but working with hashes is a little painful for users. Thankfully there are some pretty nice DNS features to help us.

We want to publish our hash. Please note this can take up to a minute!

$ ipfs name publish QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU
Published to k51qzi5uqu5dmcw6bb5u62ytlbq6z23xmq4csgjw4z2ejenzanjysyr7z97wpe: /ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU

And let’s test it…

$ ipfs name resolve k51qzi5uqu5dmcw6bb5u62ytlbq6z23xmq4csgjw4z2ejenzanjysyr7z97wpe
/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU

What we want to do now is take a domain that we own or control and add a special TXT record called “_dnslink” with the value: dnslink=/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU

Replace the hash with your own.

Let’s query the domain for the TXT entry to see if it exists (which it does).

$ dig +noall +answer TXT _dnslink.sytelreply.com
_dnslink.sytelreply.com. 194 IN TXT "dnslink=/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU"
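All the gateway does with that record is strip the “dnslink=” prefix to recover the content path. The same extraction in bash, using the record value inline rather than a live “dig” query:

```shell
# The TXT record value as returned by dig (surrounding quotes removed).
txt='dnslink=/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU'

# Strip the 'dnslink=' prefix to get the path the gateway serves.
path=${txt#dnslink=}
echo "$path"
```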

Once DNS has propagated you should be able to open your “index.html” like this…

https://ipfs.io/ipns/YOUR_FQDN/index.html

# in my case
https://ipfs.io/ipns/sytelreply.com/index.html

If you would like to learn more about how this works you can read up about it here.

So we have gone from this…

https://ipfs.io/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU/index.html

To this…

https://ipfs.io/ipns/sytelreply.com/index.html

That’s a big improvement but it can be even better.

You can either leave the TXT record in place or replace it. It is up to you. I’m going to add this in addition.

  • Create yourself a CNAME for “ipfs” (or whatever you want to call it) pointing to “gateway.ipfs.io”.
  • Create yourself a TXT record for “_dnslink.ipfs” (“ipfs” being whatever you selected in the point above) with the value “dnslink=/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU”. As with the previous example, the hash is the directory wrapper hash for the “index.html” we created earlier. Just a note that “_dnslink.ipfs” will be expanded to “_dnslink.ipfs.sytelreply.com”.
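In BIND zone-file syntax, the two records from the bullets above would look roughly like this (substitute your own domain, label, and hash):

```
ipfs           300  IN  CNAME  gateway.ipfs.io.
_dnslink.ipfs  300  IN  TXT    "dnslink=/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU"
```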

We can confirm this with “dig”.

$ dig cname ipfs.sytelreply.com

; <<>> DiG 9.16.1-Ubuntu <<>> cname ipfs.sytelreply.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41884
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494

;; QUESTION SECTION:
;ipfs.sytelreply.com.  IN CNAME

;; ANSWER SECTION:
ipfs.sytelreply.com. 300 IN CNAME gateway.ipfs.io.

;; Query time: 87 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Sun Oct 11 21:16:36 UTC 2020
;; MSG SIZE  rcvd: 77

$ dig +noall +answer TXT _dnslink.ipfs.sytelreply.com
_dnslink.ipfs.sytelreply.com. 300 IN TXT "dnslink=/ipfs/QmXozk8es7YoL3UU2v1uMBwsiQwaSTSbw3a7tZkA1yRNEU"

So a request to “http://ipfs.sytelreply.com/index.html” will be directed to “gateway.ipfs.io”, which will query our zone file for the TXT record and serve our file.


You can find a complete list of IPFS commands here:
https://docs.ipfs.io/reference/cli/#ipfs

If you enjoyed reading this article and would like me to write on any other topics please let me know in the comments or email me directly.

I’m the Head of the Networks Practice at Net Reply. My team specialises in networks, security, and process automation including self-service dashboards. If you would like more information on this please contact me on m.whittle@reply.com. Alternatively, you can learn more about us on LinkedIn and Twitter.