Harness the Power of Distributed Storage: IPFS and Swarm in Ethereum Blockchain Applications
Ethereum is a general-purpose blockchain better suited to describing business logic through advanced scripts, also known as smart contracts. Ethereum was designed with a broader vision: a decentralized, or world, computer that attempts to marry the power of the blockchain, as a trust machine, with a Turing-complete contract engine. Although Ethereum borrows many ideas initially introduced by Bitcoin, the two diverge in many ways.
The Ethereum Virtual Machine and smart contracts are key elements of Ethereum, and constitute its main attraction. In Ethereum, a smart contract is a piece of code written in a high-level language (Solidity, LLL, Vyper) and stored as bytecode in the blockchain, in order to run reliably in a stack-based virtual machine (the Ethereum Virtual Machine) on each node, once invoked. Interactions with smart contract functions happen through transactions on the blockchain network; their payloads are executed in the Ethereum Virtual Machine, and the shared blockchain state is updated accordingly.
If you are not familiar with blockchain technology, reading the History and Evolution of Blockchain Technology from Bitcoin article is highly recommended.
If you would like to learn more about Ethereum blockchain development (with Solidity programming), the articles Blockchain Developer Guide: Introduction to Ethereum Blockchain Development with DApps and Ethereum VM, Building Ethereum Financial Applications with Java and Web3j API through Blockchain Oracles, and Building Enterprise Blockchain-as-a-Service Applications Using Ethereum and Quorum are highly recommended.
In our discussions of Ethereum so far, it should have become clear that using the Ethereum blockchain to store large amounts of data is neither cost-effective, nor what it was designed for. Storing information in the blockchain means storing it as state data, and doing so requires payment in the form of gas.
In this recipe, we will look at ways that data can be stored in a distributed manner by way of two different platforms: IPFS and Swarm. After introducing both platforms, we will create a small project to become familiar with how IPFS can be used programmatically from a website frontend.
Background
The Ethereum Virtual Machine (EVM) operates on words that are 256 bits, or 32 bytes, in size. Each 256-bit word of data costs 20,000 gas units to store, equating to 640,000 gas units per kilobyte. At the time of writing, the gas price is around 4.5 Gwei (0.0000000045 ETH), making a kilobyte of data cost 0.00288 ETH.
Scaling this up gives a cost of 2,880 ETH per GB of data; at the current price of $220 per ETH, each GB carries a price tag of roughly $630,000. This is extremely expensive compared to conventional centralized cloud storage, where costs per GB are usually in the region of cents.
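The arithmetic above can be checked in a few lines. Note that the prices are the snapshot figures quoted in the text, and 1 GB is taken as 10^6 KB, as in the original calculation:

```javascript
// On-chain storage cost, using the figures quoted above.
const GAS_PER_WORD = 20000;        // SSTORE of one fresh 256-bit word
const WORDS_PER_KB = 1024 / 32;    // 32 words per kilobyte
const gasPerKB = GAS_PER_WORD * WORDS_PER_KB;   // 640,000 gas

const gasPriceEth = 4.5e-9;        // 4.5 Gwei, the snapshot gas price
const ethPerKB = gasPerKB * gasPriceEth;        // 0.00288 ETH

const ethPerGB = ethPerKB * 1e6;   // taking 1 GB = 10^6 KB
const usdPerGB = ethPerGB * 220;   // at $220 per ETH

console.log(gasPerKB, ethPerKB, ethPerGB, usdPerGB);
```

Both the gas price and the ETH price fluctuate constantly, so these numbers are illustrative only; the conclusion that bulk storage belongs off-chain holds across any realistic values.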
If we can’t store large amounts of data in the blockchain itself, then the logical alternative would be to store it in a centralized storage layer, while making it available to the data layer located on a blockchain. An example of this would be a DApp that uses the blockchain as its decentralized data layer backend, but with the storage layer and frontend hosted on a conventional, centralized server.
The following diagram shows how this is usually achieved, via a decentralized blockchain backend with a centralized frontend and storage:
For some, this is an acceptable compromise between a decentralized data layer and a centralized storage layer, the latter being a necessary evil to get around the costs of storing data in the blockchain itself. It is, however, possible to do better, and make the storage layer decentralized, as well, without needing to use a blockchain directly.
In early versions of the Web3 technology stack, decentralized storage was provided by Ethereum’s Swarm platform, which was designed to store and distribute a DApp’s frontend code, as well as general blockchain data:
The Web3 technology stack
Another service, called InterPlanetary File System (IPFS), provides an alternative to Swarm. The two platforms are similar, and to some extent, they can be used interchangeably, as will be discussed in more detail in subsequent sections.
Swarm and IPFS
Before looking at how we can make use of each of the two alternatives for decentralized storage in detail, we’ll first briefly look at their main similarities and differences.
The aim of each project is to provide both a general decentralized storage layer and a content delivery protocol. To do so, both technologies use peer-to-peer networks composed of client nodes, which are able to store and retrieve content. The files that are stored on each of the platforms are addressed by the hashes of their content.
A result of being able to store files is that both IPFS and Swarm are able to store and serve the HTML, CSS, and JavaScript of applications built on top of them, and can therefore take the place of traditional server backends.
For files that are too large to be stored whole, both projects offer a model whereby larger files can be served in chunks, much the same as in the BitTorrent protocol. One of the main issues with the BitTorrent protocol is that users are not incentivized to host, or seed, content, creating a one-sided system in which many downloaders feed from a few hosts.
To mitigate similar issues, IPFS and Swarm are able to incentivize users to run clients by way of monetary rewards. For Swarm, the incentives are built in, as Swarm must be run in conjunction with an Ethereum Geth client.
For IPFS, a separate incentive layer must be applied, in the form of Filecoin (see http://filecoin.io).
Although the two platforms are similar in many ways, there are also differences. Firstly—and perhaps most importantly, from the perspective of a developer—the IPFS project is more mature, and has a higher level of adoption, despite Swarm’s more integral position in the Ethereum ecosystem.
Further differences mainly involve the technologies from which each platform is built. For example, Swarm uses a content-addressed chunkstore, rather than the distributed hash table (DHT) used by IPFS. A further example is that, due to its close association with the rest of the Ethereum stack, Swarm is able to make use of Ethereum’s DevP2P protocol (https://github.com/ethereum/wiki/wiki/%C3%90%CE%9EVp2p-Wire-Protocol), whereas IPFS uses the more generic libp2p network layer (https://github.com/libp2p).
Installing IPFS
We will start by installing IPFS locally on our machine, which will give us the tools required to upload and view content on the IPFS network. Installation processes will vary depending on your machine's architecture—full instructions can be found at https://ipfs.io/docs/install/.
Once IPFS has been installed, we can initialize our node as follows:
ipfs init
Once this has completed correctly, the following output will be displayed:
initializing ipfs node at /Users/jbenet/.go-ipfs
generating 2048-bit RSA keypair...done
peer identity: Qmcpo2iLBikrdf1d6QU6vXuNb6P7hwrbNPW9kLAH8eG67z
to get started, enter:

ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
Run the suggested ipfs cat command to read a welcome file:
Hello and Welcome to IPFS!
██╗██████╗ ███████╗███████╗
██║██╔══██╗██╔════╝██╔════╝
██║██████╔╝█████╗ ███████╗
██║██╔═══╝ ██╔══╝ ╚════██║
██║██║ ██║ ███████║
╚═╝╚═╝ ╚═╝ ╚══════╝
If you’re seeing this, you have successfully installed IPFS and are now interfacing with the ipfs merkledag!
-------------------------------------------------------
| Warning:                                            |
| This is alpha software. Use at your own discretion! |
| Much is missing or lacking polish. There are bugs.  |
| Not yet secure. Read the security notes for more.   |
-------------------------------------------------------
Check out some of the other files in this directory:
./about
./help
./quick-start <-- usage examples
./readme <-- this file
./security-notes
We’ve now initialized our node, but we haven’t yet connected to the network. To do so, we can run the following command:
ipfs daemon
To check that we are connected correctly, we can view the nodes in the network that we’re directly connected to, as follows:
ipfs swarm peers
Having connected to the network, it will now be possible to access the documents stored and distributed by it. To test whether we can do this, we’ll use a test file that already resides on the network.
In this case, it’s an image of a cat, and, because it’s an image file, we’ll first need to direct the data to a new file for viewing:
ipfs cat /ipfs/QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ/cat.jpg > cat.jpg
This is perhaps a confusing example: we’re using the ipfs cat command to show an object stored in IPFS, where the object itself is a picture of a cat, named cat.jpg.
As well as accessing the file directly, the file can also be viewed by using any of the following options:
- A local browser-based user interface (http://localhost:5001/webui)
- The local IPFS gateway that is run by our client, on port 8080:
curl "http://127.0.0.1:8080/ipfs/QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ/cat.jpg" > local_gateway_cat.jpg
- A remote public IPFS gateway URL:
curl "https://ipfs.io/ipfs/QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ/cat.jpg" > public_gateway_cat.jpg
Now that our local client is running correctly, and we are able to interact with the network, we can move on to adding files and directories.
The simplest way to add a single file is to use the following command, passing it the path to the file that you want to upload:
ipfs add <file>
Running this command returns a hash, which is used as the file’s IPFS address. This address is based on the contents of the file, and can be used to access the file, as shown previously.
We’ll return to adding files later, when we add the files associated with our ICO website to the network.
Installing Swarm
Installing Swarm is slightly more involved than IPFS, and it depends on a running Geth client. If you haven't already done so, ensure that Geth is installed by using the appropriate instructions for your OS (see https://github.com/ethereum/go-ethereum/wiki/Installing-Geth).
Once this is complete, install Swarm, also using the appropriate instructions for your OS, from https://swarm-guide.readthedocs.io/en/latest/installation.html.
Once it has been installed, we can check for a successful installation by using the following command:
$ swarm version
Swarm
Version: 0.3.2-stable
Git Commit: 316fc7ecfc10d06603f1358c1f4c1020ec36dd2a
Go Version: go1.10.1
OS: linux
Before we run our local Swarm client, if Geth doesn’t already have an account configured, we should create one by using the following command:
geth account new
This account can then be used to connect to the Swarm network, as follows:
swarm --bzzaccount <your_account_address>
Once the client is running, we can upload a test file from another Terminal. Here, we use the picture of the cat that we downloaded from IPFS:
$ swarm up cat.jpg
5f94304f82dacf595ff51ea0270b8f8ecd593ff2230c587d129717ec9bcbf920
The returned hash is the hash of the Swarm manifest associated with the file. This is a JSON file containing the cat.jpg file as its only entry. When we uploaded the cat.jpg file, the manifest was uploaded with it, allowing for the main file to be returned with the correct MIME type.
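For a single-file upload, the manifest is a small JSON document along the following lines. This is an illustrative sketch, with the hash value a placeholder for the Swarm hash of the raw cat.jpg data:

```json
{
  "entries": [
    {
      "hash": "<hash-of-the-raw-cat.jpg-data>",
      "path": "",
      "contentType": "image/jpeg"
    }
  ]
}
```

The contentType field is what lets the gateway serve the image with the correct MIME type when the manifest hash is requested.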
To check that the file has uploaded, we can use one of several options. First, we can access it through our local Swarm client’s HTTP API on port 8500, using a web page that it serves at http://localhost:8500. From here, entering the hash returned by Swarm will display the picture of our cat.
The second option is to access the file through Swarm’s public gateway, at http://swarm-gateways.net/bzz:/<file_hash>/ (substituting your own hash at the end).
Hosting our frontend
In Recipe 8, Creating an ICO, we created all of the necessary components of an ICO. Our backend consisted of the ERC-20 token contract, together with the token sale contract, both of which were deployed to the Rinkeby network.
We also created a frontend in the form of a standard HTML, CSS, and JavaScript website, hosted on a local web server. Our frontend interacted with the blockchain backend by way of the Web3 JavaScript library and the MetaMask browser add-on.
Now, we want to remove our reliance on the centralized frontend, and replace it with a frontend served by either IPFS or Swarm.
Serving your frontend using IPFS
Let’s start by deploying our frontend to IPFS. First, we need to decide which files are required by the frontend, and add them to a directory that we can add to IPFS. The following files will be required:
- The frontend code that we created in the src/ directory
- The contract abstractions, in the form of JSON files, in the build/contract/ directory
From the root of our ICO project, add the contents of these directories to a new directory that we’ll use for the deployment to IPFS:
$ mkdir dist
$ rsync -r src/ dist/
$ rsync -r build/contract/ dist/
Then, add the new directory to IPFS by using the add command that we met earlier, passing it the recursive -r flag, as follows:
$ ipfs add -r dist/
added QmXnB22ZzXCw2g5AA1EnNT1hTxexjrYwSzDjJgg8iYnubQ dist/Migrations.json
added Qmbd5sTquZu4hXEvU3pQSUUsspL1vxoEZqiEkaJo5FG7sx dist/PacktToken.json
added QmeBQWLkTZDw84RdQtTtAG9mzyq39zZ8FvmsfAKt9tsruC dist/PacktTokenSale.json
added QmNi3yyX7gGTCb68C37JykG6hkCvqxMjXcizY1fb1JsXv9 dist/SafeMath.json
added Qmbz7fF7h4QaRFabyPUc7ty2MN5RGg9gJkodq2b8UydkYP dist/index.html
added QmfKUdy8NGaYxsw5Tn641XGJcbJ1jQyRsfvkDKej56kcXy dist/js/app.js
added Qmc3xyRTJ2wNt2Ep5BFvGnSRyQcjk5FhYXkVfvobG26XAm dist/js
added QmQytFqNoyk8H1ZEaHe9wkGcbcUgaRbReEzurPBYRnbjNB dist
With the files required for our site uploaded, we can access our fully decentralized ICO web page by taking the hash corresponding to the parent directory and viewing it via a public IPFS URL, as follows: https://ipfs.io/ipfs/QmQytFqNoyk8H1ZEaHe9wkGcbcUgaRbReEzurPBYRnbjNB/.
This will bring up a fully decentralized version of our ICO website, which can be interacted with in the usual way, using the MetaMask browser add-on.
Note that the page may take longer to load than a centrally served frontend: this decrease in performance should perhaps be considered the price paid to make our site truly decentralized and censorship-resistant.
Using IPNS
Whenever we need to update any of our ICO frontend files, the associated IPFS file hashes will change, meaning that the IPFS path that we first generated for our project will no longer equate to the most recent version of the files. This will cause the public URL to change whenever file changes are made.
To solve this problem, we can use IPNS (the InterPlanetary Name System), which allows us to publish a link that won't change even when the underlying files do.
Let’s publish the first iteration of the site to IPNS, using the hash of the main directory:
ipfs name publish QmQytFqNoyk8H1ZEaHe9wkGcbcUgaRbReEzurPBYRnbjNB
After a while, this will return a hash similar to the following:
Published to QmZW1zPDKUtTbcbaXDYCD2jUodw9A2sNffJNeEy8eWK3bG:
/ipfs/QmQytFqNoyk8H1ZEaHe9wkGcbcUgaRbReEzurPBYRnbjNB
This shows a new hash, which is the peerID, along with the path that we are publishing to it. We can check that the peerID correctly resolves to our IPFS path, as follows:
$ ipfs name resolve QmZW1zPDKUtTbcbaXDYCD2jUodw9A2sNffJNeEy8eWK3bG
/ipfs/QmQytFqNoyk8H1ZEaHe9wkGcbcUgaRbReEzurPBYRnbjNB
The most recent version of our site will now be available at the following URL (note that we are now using ipns/ instead of ipfs/): https://ipfs.io/ipns/QmZW1zPDKUtTbcbaXDYCD2jUodw9A2sNffJNeEy8eWK3bG/.
Whenever the files of the frontend are updated, they can be published to the same IPNS path:
$ ipfs add -r dist/
$ ipfs name publish <new_directory_hash>
We can now access the most up-to-date version of our site without needing to know the updated hash associated with our changed dist/ directory.
Our IPNS hash, however, is still not very user-friendly. The next logical step would therefore be to bind our IPNS file path to a static domain, something that can be done by purchasing a domain and making the binding publicly known. Doing this would require us to alter the DNS TXT record associated with our domain, pointing it at our IPNS path.
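For illustration only (example.com is a placeholder for a purchased domain, and the record follows the DNSLink convention that public IPFS gateways understand), the zone entry would look something like this:

```
_dnslink.example.com.  300  IN  TXT  "dnslink=/ipns/QmZW1zPDKUtTbcbaXDYCD2jUodw9A2sNffJNeEy8eWK3bG"
```

Once such a record propagates, the site would be reachable via a gateway path such as https://ipfs.io/ipns/example.com/.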
There is one problem with this: it increases our reliance on centralization. Using a centralized DNS server to resolve our site's address would defeat the object of using IPFS and IPNS in the first place. For now, this isn't something that we will do.
Our site is now fully decentralized, and we have published our files on IPNS. In the next section, we will explore the equivalent operations with Swarm.
Serving your frontend using Swarm
The method for adding our site to Swarm is almost identical to the previous method. With Swarm running, as outlined earlier, we will use our existing dist/ directory, along with the --recursive flag.
In addition, we will pass a default path that specifies the file to render in the case of a request for a resource with an empty path, as follows:
$ swarm --defaultpath dist/index.html --recursive up dist/
2a504aac8d02f7715bea19c6c19b5a2be8f7ab9442297b2b64bbb04736de9699
Having uploaded our files, the site can then be viewed through our local Swarm client using the generated hash, as follows: http://localhost:8500/bzz:/2a504aac8d02f7715bea19c6c19b5a2be8f7ab9442297b2b64bbb04736de9699/.
In the same way that we can use a public gateway to view our IPFS-hosted site, we can also use Swarm's public gateway: http://swarm-gateways.net/bzz:/2a504aac8d02f7715bea19c6c19b5a2be8f7ab9442297b2b64bbb04736de9699/.
Our frontend is now deployed to Swarm, but, as with the case for IPFS, our public-facing URL isn’t very user-friendly. To solve this problem, we will use the Ethereum Name Service (ENS), which is a distributed and extensible naming system based on the Ethereum blockchain.
ENS
The aim of ENS is to provide a better and more secure user experience when dealing with Ethereum addresses or Swarm hashes. It allows the user to register and associate an address or hash with a more user-friendly string, such as packttoken.eth. This is similar to the way in which DNS maps a user-friendly URL to an IP address.
We will register a test domain on the Rinkeby testnet, which supports .test domains, but not the .eth domains supported by the mainnet and the Ropsten testnet. This is to our benefit—registering a .test domain is a much quicker process than registering the .eth equivalent, as it doesn’t involve going through an auction process. It should be noted that the .test domain expires after 28 days.
ENS is composed of a series of smart contracts, which we will describe briefly, as follows:
- ENS root contract: This contract keeps track of the registrars that control the top-level .eth and .test domains, and of which resolver contract to use for which domain.
- Registrar contracts: These contracts are similar to their DNS namesakes, and are responsible for administering a particular domain. There is a separate registrar for the .eth domain on the mainnet, .test on Ropsten, and .test on Rinkeby.
- Resolver contracts: These contracts are responsible for the actual mapping between Ethereum addresses or Swarm content hashes and the user-friendly domain names.
The first step is to register our .test domain on Rinkeby. To begin, we need to download a JavaScript file that contains certain contract ABI definitions and helper functions that will simplify the overall registration process. The file can be found at https://github.com/ensdomains/ens/blob/master/ensutils-testnet.js.
The contents should be copied into a local file which we will access later.
We will be working on the Rinkeby testnet, but the file that we have downloaded contains the addresses associated with the Ropsten testnet ENS contracts, so we’ll need to change it to point to the equivalent contracts on Rinkeby.
On line 220, change the line to point to the Rinkeby ENS root contract, as follows:
var ens = ensContract.at('0xe7410170f87102df0055eb195163a03b7f2bff4a');
The second change that we need to make is to the address associated with the Rinkeby public resolver contract. On line 1,314, change it to point to the following address:
var publicResolver = resolverContract.at('0x5d20cf83cb385e06d2f2a892f9322cd4933eacdc');
Registering our domain requires a running Geth node connected to the Rinkeby network. If it hasn’t been left running, restart the Geth node using the following command, and allow it to sync to the latest block:
geth --networkid=4 --datadir=$HOME/.rinkeby --rpc --cache=1024 --bootnodes=enode://a24ac7c5484ef4ed0c5eb2d36620ba4e4aa13b8c84684e1b4aab0cebea2ae45cb4d375b77eab56516d34bfbd3c1a833fc51296ff084b770b94fb9028c4d25ccf@52.169.42.101:30303
From a second Terminal, we now need to attach to the running client and load the edited ensutils-testnet.js file with the abstractions of our Rinkeby ENS contracts:
$ geth attach ~/.rinkeby/geth.ipc
> loadScript("./ensutils-testnet.js")
true
We will now have access to the relevant functions inside both the registrar and resolver contracts deployed on Rinkeby. First, check that the name you want to use is available:
> testRegistrar.expiryTimes(web3.sha3("packt_ico"))
0
This will return a timestamp equal to the time at which the name expires. A zero timestamp means that the name is available.
We can then register the name with the registrar contract, first ensuring that we have a funded account that our Geth client can access:
> testRegistrar.register(web3.sha3("packt_ico"), eth.accounts[0], {from: eth.accounts[0]})
"0xe0397a6e518ce37d939a629cba3470d8bdd432d980531f368449149d40f7ba92"
This will return a transaction hash that can be checked on Etherscan for inclusion in the blockchain. Once it has been included, we can query the registrar contract to check the expiry time and owner account:
> testRegistrar.expiryTimes(web3.sha3("packt_ico"))
1538514668
> ens.owner(namehash("packt_ico.test"))
"0x396ebfd1a0ec6e6cefe6035acf487900a10fcf56"
We now own an ENS domain name, but it doesn't yet point to anything useful. To change that, we need to use the public resolver contract whose address we also added to ensutils-testnet.js.
The next step is to create a transaction to the public resolver contract, in order to associate our Swarm file hash with our new domain name. Note that 0x must be added to the front of our hash in order for the contract to parse it correctly:
> publicResolver.setContent(namehash("packt_ico.test"), '0x2a504aac8d02f7715bea19c6c19b5a2be8f7ab9442297b2b64bbb04736de9699', {from: eth.accounts[0]});
"0xaf51ba63dcedb0f5c44817f9fd6219544a1d6124552a369e297b6bb67f064dc7"
So far, we have registered our domain name with the public registrar contract and set a public resolver to map our domain name to the Swarm hash.
The next connection to make is to tell the ENS root contract the address of the resolver to use for our domain name:
> ens.setResolver(namehash("packt_ico.test"), publicResolver.address, {from: eth.accounts[0]})
"0xe24b4c35f1dadb97b5e00d7e1a6bfdf4b053be2f2b78291aecb8117eaa8eeb11"
We can check whether this has been successful by querying the ENS root contract:
> ens.resolver(namehash("packt_ico.test"))
"0x5d20cf83cb385e06d2f2a892f9322cd4933eacdc"
The final piece of the puzzle is to tell our local Swarm client how to find and use the correct resolver contract. To do this, we need to start it with the --ens-api option, which tells Swarm how to resolve ENS addresses. In this case, we pass an IPC path connecting to our Geth client, which is itself connected to the Rinkeby network where our ENS contracts reside.
As a part of this command, we also pass the address of Rinkeby’s ENS root contract:
swarm --bzzaccount <your_rinkeby_account> --ens-api 0xe7410170f87102df0055eb195163a03b7f2bff4a@/home/<your_home_directory>/.rinkeby/geth.ipc --datadir ~/.rinkeby
Our site can now be viewed at the following local URL: http://localhost:8500/bzz:/packt_ico.test/.
If we wanted to make our new domain publicly accessible, rather than just accessible on our local machine, we would need to register a .eth domain on the mainnet. At present we are accessing our website using our .test domain through our local Swarm client, which we’ve connected to our Rinkeby Geth client.
The public Swarm gateway, however, is connected to the Ethereum mainnet, so it can only access the mainnet ENS contracts, and not those on Rinkeby.
IPFS file uploader project
In the second part of this recipe, we will take a more detailed look at how we can use IPFS’s HTTP API programmatically. To do so, we will create a simple HTML and JavaScript web page from which we can upload files to IPFS directly, without the need to first upload the files to a backend server.
To do this, we will use the JS-IPFS-API JavaScript library, which will allow a browser-based frontend to communicate directly with the local IPFS node that we created earlier in the recipe. Note that this API library should not be confused with JS-IPFS, which is a JavaScript implementation of the main IPFS protocol.
The aim here is not to explore the HTTP API completely, but to provide a basic example of how it can be used. To learn more about the methods made available by the API, the relevant documentation should be consulted, at https://github.com/ipfs/js-ipfs-api.
Project setup
First, we need to run IPFS by using the appropriate configuration, which will differ from the configuration that we used earlier in the recipe. If IPFS is still running from earlier, first, stop the process.
Next, we need to apply some configurations that will allow IPFS to perform cross-origin requests, or Cross-Origin Resource Sharing (CORS). This will allow our web page, which will be run by a local web server on its own local domain, to interact with the local IPFS gateway, itself hosted locally on a different domain:
$ ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["GET", "POST", "PUT", "OPTIONS"]'
$ ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'
Having done this, we can start IPFS again by using the ipfs daemon command.
For the purposes of this project, we will run our web page on a local development web server. An exercise for the reader would be to follow the instructions earlier in this recipe to upload the site itself to IPFS, thereby giving us an IPFS-hosted IPFS file uploader!
First, let’s create our project directory structure:
$ mkdir packtIpfsUploader
$ cd packtIpfsUploader
We can initialize npm and walk through the suggested values:
npm init
Then, we’ll install the web server package:
npm install --save lite-server
And finally, we’ll edit the package.json file to include a way to easily start the server:
…
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "dev": "lite-server"
},
…
The web page
Our website will be a single page, created from two files: an index.html file and a main.js JavaScript file. It will have a single input for specifying the local file to upload, and a button to initiate the upload.
Once uploaded, a link to the file on IPFS will be shown on the page, as shown in the following screenshot:
User interface following a successful file upload
index.html
Our HTML will be very simple. The styling is provided by Bootstrap, with any additional minor styling declared inline, rather than in a separate CSS file. Our HTML file pulls in the following dependencies from external CDN sources:
- Bootstrap: Both the CSS and JS components
- jQuery: Required by Bootstrap
- IPFS API: Required to interact with our local IPFS client
- Buffer: Required to convert our file data into buffer, so that it can be passed to IPFS
The file itself is perhaps not interesting enough to explore in detail, so it is shown in full as follows, without further explanation:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <link href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
    <title>IPFS File Uploader</title>
</head>
<body>
    <div class="container" style="width: 650px">
        <div class="col-lg-12">
            <h1 class="text-center" style="margin-top: 100px">
                IPFS File Uploader
            </h1>
            <hr />
        </div>
        <div class="input-group">
            <div class="custom-file">
                <input type="file" class="custom-file-input" id="fileToUpload">
                <label class="custom-file-label" for="fileToUpload">
                    Choose file...
                </label>
            </div>
            <div class="input-group-append">
                <button class="btn btn-outline-secondary" type="button" onclick="uploadFile()">
                    Upload
                </button>
            </div>
        </div>
        <hr />
        <div id="filePath" style="display: none;">
            File uploaded to:
            <a id="ipfsUrl" href="">
                <span id="ipfsUrlString"></span>
            </a>
        </div>
    </div>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.min.js" integrity="sha384-ChfqqxuZUCnJSK3+MXmPNIyE6ZbWh2IMqE241rYiqJxyMiZ6OW/JmZQ5stwEULTy" crossorigin="anonymous"></script>
    <script src="https://wzrd.in/standalone/buffer"></script>
    <script src="https://unpkg.com/ipfs-api@9.0.0/dist/index.js" integrity="sha384-5bXRcW9kyxxnSMbOoHzraqa7Z0PQWIao+cgeg327zit1hz5LZCEbIMx/LWKPReuB" crossorigin="anonymous"></script>
    <script src="main.js"></script>
</body>
</html>
main.js
Our JavaScript code is more interesting, and it is where the interaction with IPFS takes place. The main work happens in the function that we will pass to the HTML button as the onclick handler, uploadFile(). This function performs the following tasks:
- Reads the file into a raw binary data buffer using FileReader
- Initializes an ipfs object and binds it to our IPFS client by calling the IpfsApi constructor
- Adds the file to IPFS using the ipfs.files.add() method
- Uses the resulting hash to create a URL that can be output to the user
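The buffer conversion in the second-to-last step is worth isolating. Browsers have no built-in Buffer, which is why the page loads the standalone buffer package from a CDN; the following Node sketch shows the same conversion of FileReader's ArrayBuffer result, using Buffer.from(), the modern replacement for the deprecated Buffer() constructor call that appears in the listing:

```javascript
// FileReader.readAsArrayBuffer produces an ArrayBuffer, but ipfs-api
// expects a Buffer, so the raw bytes must be re-wrapped before upload.
const arrayBuf = new Uint8Array([104, 101, 108, 108, 111]).buffer; // "hello"
const buf = Buffer.from(arrayBuf); // views the same underlying bytes
console.log(buf.toString()); // -> hello
```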
The following is the full file, showing the commented code. Along with the main upload function, the file contains a small amount of jQuery code, used in parsing the filename from the input and in showing the resulting IPFS URL:
// Get a reference to the file path output element from the HTML.
const filePath = $('#filePath');

// Change the string displayed in the input to reflect the selected file.
$('#fileToUpload').on('change', function () {
    let fileName = $(this).val().split('\\').pop();
    $(this).next('.custom-file-label').html(fileName);
    filePath.hide();
});

function uploadFile() {
    // Create a new FileReader instance to read the file.
    const reader = new FileReader();

    // Define a function to be called once reading the file is complete.
    reader.onloadend = () => {
        // Call the IpfsApi constructor as a method of the global window object.
        const ipfs = window.IpfsApi('localhost', 5001);

        // Put the file data into a buffer that IPFS can work with.
        const buf = buffer.Buffer(reader.result);

        // Add the file buffer to IPFS, returning on error.
        ipfs.files.add(buf, (err, result) => {
            if (err) {
                console.error(err);
                return;
            }

            // Form the IPFS URL to output to the user.
            const outputUrl = `https://ipfs.io/ipfs/${result[0].hash}`;
            const link = document.getElementById('ipfsUrl');
            link.href = outputUrl;
            document.getElementById('ipfsUrlString').innerHTML = outputUrl;

            // Show the URL to the user.
            filePath.show();
        });
    };

    // Get the file from the HTML input.
    const file = document.getElementById('fileToUpload');

    // Read the file into an ArrayBuffer, which represents the file as a
    // fixed-length raw binary data buffer.
    reader.readAsArrayBuffer(file.files[0]);
}
Once the files have been created inside of the project directory, we can run the project by starting the web server:
npm run dev
This will serve the page locally, at http://localhost:3000, or on the next available port (if port 3000 is taken).
From here, clicking on Choose file... will open a file browser, from which a local file can be selected. Clicking on Upload will then add the file to IPFS via our IPFS node, and will display the URL at which the file can be viewed via the public IPFS gateway.
Our file uploader is now running on our local machine. To make our file uploader available to the public, we can either host it on a centralized hosting platform, or, as mentioned earlier, push the frontend files to IPFS itself.
Conclusion
In this recipe, we introduced the idea of complementing a decentralized blockchain data layer with a decentralized storage layer, in the form of either IPFS or Swarm. We described the installation and basic uses of both technologies.
Finally, we introduced a very simple example of how the IPFS API can be used programmatically.
If you would like to explore blockchain development with an alternative platform like Hyperledger, or learn about Hyperledger projects such as Sawtooth and Iroha, visit the Comprehensive Hyperledger Training Tutorials page for an outline of our Hyperledger articles.
To conclude this recipe, we would like to recommend the Blockchain Hyperledger Development in 30 hours, Learn Hands-on Blockchain Ethereum Development & Get Certified in 30 Hrs, and Become Blockchain Certified Security Architect in 30 hours courses to those interested in pursuing a blockchain development career. This recipe was written by Brian Wu, our senior blockchain instructor and consultant in Washington, DC. His books (Blockchain By Example and Hyperledger Cookbook) are highly recommended for learning more about blockchain development.