This repository has been archived by the owner on Feb 26, 2020. It is now read-only.

Download network dapps and load them from filesystem #144

Merged
merged 15 commits into master from ac-registry-dapps on Jul 9, 2018

Conversation

Contributor

@axelchalon axelchalon commented Jun 21, 2018

Network dapps are now completely handled by Parity UI.

Uses parity-js/ui#10

  • Fetch icon hash, manifest hash, content hash from registry
  • Resolve to a URL using GitHub Hint
  • Download file as ~/.config/parity-ui/hashfetch/partial/<hash>.part
  • Once the download is finished, move it directly to ~/.config/parity-ui/hashfetch/files/<hash> if it's a regular file; if it's a zip archive, extract it temporarily to ~/.config/parity-ui/hashfetch/partial-extract/<hash> and then move it to ~/.config/parity-ui/hashfetch/files/<hash>

Manifests and icons are fetched on the dapps list page (Dapps); the dapps themselves (their contents) are fetched on demand, when the user wants to open the dapp. I added a loading screen if the dapp takes more than 1.5 seconds to load (we then assume it's a network dapp and that it's being downloaded).


Tomek advised me to set an upper limit for the downloads:

The upper limit is just to prevent your node from downloading some super huge zip file that may crash the node (or just generate a lot of load on the target server - so pretty much a ddos attack)

I set 20MB as max file size to download, regardless of whether it's a regular file (manifest or icon) or a dapp zip archive.

Tomek's reply to my suggestion to blacklist the URL locally if the download fails (to prevent DDoSing a target victim server):

We can't really blacklist the destination. Maybe we can do that temporarily cause the file on the server might change in the future, same if the hash does not match
actually, given what we have seen with the uploader we should probably ban it temporarily with exponential increase so like, ban for 30 seconds, then 60s, 120s, and so on

In case of a hash mismatch or a too-large file (though not in case of a connection error), we register the timestamp of the failed attempt, and the next time we request this hash with this URL, we check whether enough time has elapsed. The failed-attempt history is stored in hashfetch/failed_history.json
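
The exponential ban (30s, then 60s, 120s, …) can be sketched as follows. The PR's actual class is ExpoRetry persisted to hashfetch/failed_history.json; the function names and the `${hash}:${url}` key format here are illustrative.

```javascript
const BASE_DELAY_MS = 30 * 1000; // First ban lasts 30 seconds

// failHistory maps `${hash}:${url}` to { attempts, lastAttemptTime }
function canAttemptDownload (failHistory, hash, url, now) {
  const entry = failHistory[`${hash}:${url}`];

  if (!entry) { return true; } // No failure recorded for this hash+url

  // Ban duration doubles with every failed attempt: 30s, 60s, 120s, …
  const banDuration = BASE_DELAY_MS * Math.pow(2, entry.attempts - 1);

  return now - entry.lastAttemptTime >= banDuration;
}

function registerFailedAttempt (failHistory, hash, url, now) {
  const key = `${hash}:${url}`;
  const entry = failHistory[key] || { attempts: 0 };

  failHistory[key] = { attempts: entry.attempts + 1, lastAttemptTime: now };
}
```

Keying on hash+url is what makes the URL-rotation attack discussed below possible; keying on the hash alone would close that particular hole.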


I could only test on Kovan. It would be nice to compare master and this branch on the mainnet to see if there are differences (in the dapps displayed, rejected, working, etc.).

Other notes:

  • Dapp contents whose GitHub Hint entry points to a single file are rejected (we expect a zip archive)
  • Dapp contents whose GitHub Hint entry pattern designates the URL to a regular file (commit 0x0000000000000000000000000000000000000000) rather than the URL to a zip archive (commit 0x0000000000000000000000000000000000000001) are rejected
  • @amaurymartiny any idea of stuff that is obsolete now that we handle dapps completely on the Parity UI side? maybe the proxies? maybe in there https://github.com/parity-js/shell/blob/master/.gitlab-ci.yml?

Let me know what you think!

(cc @tomusdrw)

@axelchalon axelchalon added the A0-pleasereview Pull request needs code review. label Jun 21, 2018
node: {
fs: 'empty'
},
target: 'electron-renderer',
Contributor Author

@axelchalon axelchalon Jun 21, 2018

Otherwise in src/ I couldn't use modules that used require('fs') (the latter would return undefined), even though window.require('fs') worked fine in my src/ files.

This also fixes having to use window.require instead of require in src/

Not sure if there are consequences to this (some warnings and errors occur during bundling -- CI doesn't pass because of this)

Contributor

No problem, the React app does indeed run in the renderer process. I'll have a look at the CI error; building locally works fine.


// We store the URL to allow GH Hint URL updates for a given hash
// to take effect immediately, and to disregard past failed attempts at
// getting this file from other URLs.
Contributor Author

@axelchalon axelchalon Jun 21, 2018

Attempts are keyed on hash+url right now.

Should we? It's a DDoS vector if the target host is the same. If the attacker registers (some hash, "http://victim-website.com/") on GH Hint and then every few minutes changes the URL to http://victim-website.com/?x, http://victim-website.com/?xx, http://victim-website.com/?xxx, then ExpoRetry isn't used at all.
Or is this a non-issue because of costs etc.?

I'm thinking in particular of somebody registering a dapp with the icon or manifest URL in GitHub Hint set to some big file like http://victim-website.com/bigfile.zip
Parity UI tries to download the icons and manifests of network dapps each time the user goes to the dapps list (homepage). If the download failed (file too big), ExpoRetry blacklists {url,hash} with an exponential ban; however, this exponential increase will be reset each time the URL updates on GH Hint. If the attacker regularly updates the URL to http://victim-website.com/bigfile.zip?x etc., then the victim website has a bad time.

This could be fixed if we key the attempts based on the hash only (although the attacker could regularly update the manifest/icon hash in the dapp registry..). What do you think?

Contributor

@tomusdrw tomusdrw Jul 4, 2018

Sorry, I just noticed this comment today. Can we maybe use ExpoRetry on the hash, not the destination?

That said, modifying the entry is not really something that we should consider - it will always be possible either by modifying the URL or the hash itself.

@@ -45,18 +49,10 @@ function createWindow () {

mainWindow = new BrowserWindow({
  height: 800,
-  width: 1200
+  width: 1200,
+  webPreferences: { nodeIntegrationInWorker: true }
Contributor Author

Node integration is enabled for the BrowserWindow but disabled by default for its WebWorkers; I had to add this option to remove the following error (it wasn't blocking, though):

[screenshot of the error]

Contributor

Should be fine, we are not requiring any external files as web workers.

Is the error stack longer? Who is requiring @parity/shared/lib/webWorker.js? I'd be more enthusiastic about removing web workers altogether.

Contributor Author

That's the whole error stack.

We import some files of @parity/shared (for example import { initStore } from '@parity/shared/lib/redux' in index.parity.js) that require the worker of @parity/shared (in that case, initStore calls setupWorker). I imagine that's where it's coming from.

@axelchalon
Contributor Author

axelchalon commented Jun 24, 2018

Does this need to be updated?

shell/electron/index.js

Lines 92 to 104 in 5ad324f

session.defaultSession
  .setPermissionRequestHandler((webContents, permission, callback) => {
    if (!webContents.getURL().startsWith('file:')) {
      // Denies the permissions request for all non-file://. Currently all
      // network dapps are loaded on http://127.0.0.1:8545, so they won't
      // have any permissions.
      return callback(false);
    }

    // All others loaded on file:// (shell, builtin, local) can have those
    // permissions.
    return callback(true);
  });

Also, this becomes outdated:

shell/src/inject.js

Lines 91 to 93 in 5ad324f

// For now we simply allow eval(). All builtin dapps are trusted and can use
// eval(), and all network dapps are served on 127.0.0.1:8545, which have CSP
// that disallow eval(). So security-wise it should be enough.

Contributor

@tomusdrw tomusdrw left a comment

Looks good!

if (app.type === 'network') {
  return HashFetch.get().fetch(this._api, app.contentHash, 'dapp')
    .then(appPath => {
      app.localUrl = `file://${appPath}/index.html`;
Contributor

Does it work on Windows? I thought that file:/// (three slashes) is required.

Contributor Author

Good question! I need to check.

const manifestHash = bytesToHex(manifestId).substr(2);

return Promise.all([
HashFetch.get().fetch(api, imageHash, 'file').catch((e) => { console.warn(`Couldn't download icon for dapp ${appId}`, e); }),
Contributor

This will resolve the promise to undefined in case of error; is that OK?

Contributor Author

Yes, this is expected. We still want to display the dapp if the icon fetch failed. The only consequence is that the icon URL will be file://undefined

load () {
  const filePath = this._getFilePath();

  return fsExists(filePath)
Contributor

I'd skip fsExists, it's the same as just reading the file.


return fsWriteJson(this._getFilePath(), this.failHistory);
}
});
Contributor

Since we are not handling errors here it seems that if it fails once it will never be attempted again, right? Should we at least print a warning if that happens?

Contributor Author

Nice catch! (Or rather, missing catch :P) We shouldn't give up on writing forever just because it failed once. I'll add a catch.

}

function checkHashMatch (hash, path) {
  return fsReadFile(path).then(content => {
Contributor

This will read the whole file into memory; it might be better to hash without allocating. In general we should avoid reading files that were fetched over the network into memory; better to stream them where possible.

const finalPath = path.join(getHashFetchPath(), 'files', hash);

return download(url, tempPath)
// Don't register a failed attempt if the download failed; the user might be offline.
Contributor

We should actually distinguish between:

  1. User offline
  2. Size limit reached
  3. Server returned non-200 response

We can only skip registering a failed attempt in the first case; skipping it in the other two opens up a DDoS vector.
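
The three-way distinction can be sketched as an error classifier. This is illustrative only (the PR ultimately registers a failed attempt in every case, as the reply below says), and the error-code list and `statusCode` field are assumptions about what the download error would carry:

```javascript
// Node error codes that usually indicate the user is offline or the
// network is unreachable, rather than a misbehaving server.
const OFFLINE_CODES = ['ENOTFOUND', 'ECONNREFUSED', 'ETIMEDOUT', 'ENETUNREACH', 'EAI_AGAIN'];

function classifyDownloadFailure (err) {
  if (err.code && OFFLINE_CODES.includes(err.code)) { return 'offline'; }
  if (err.statusCode && err.statusCode !== 200) { return 'bad-status'; }
  if (/maximum size/i.test(err.message || '')) { return 'too-large'; }
  return 'unknown';
}

// Only a likely-offline user avoids the exponential ban; size-limit and
// bad-status failures must be recorded to close the DDoS vector.
function shouldRegisterFailedAttempt (err) {
  return classifyDownloadFailure(err) !== 'offline';
}
```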

Contributor Author

You're right. I think it's easier for now to register the failed attempt in any case, and assume the user is online.

});
}

function download (url, destinationPath) {
Contributor

What will happen if the url is file:///etc/passwd?

Contributor Author

@axelchalon axelchalon Jul 2, 2018

download is only called by hashDownload, which is called with a URL that is guaranteed to start with http:

if (commit === '0x0000000000000000000000000000000000000000') { // The slug is the URL to a file
  if (!slug.toLowerCase().startsWith('http')) { throw new Error(`GitHub Hint URL ${slug} isn't HTTP/HTTPS.`); }
  url = slug;
  zip = false;
} else if (commit === '0x0000000000000000000000000000000000000001') { // The slug is the URL to a dapp zip file
  if (!slug.toLowerCase().startsWith('http')) { throw new Error(`GitHub Hint URL ${slug} isn't HTTP/HTTPS.`); }
  url = slug;
  zip = true;
} else { // The slug is the `owner/repo` of a dapp stored in GitHub
  url = `https://codeload.github.com/${slug}/zip/${commit.substr(2)}`;
  zip = true;
}

I'll add a condition in the download function for future-proofing

httpx.get(url, response => {
  var size = 0;

  response.on('data', (data) => {
Contributor

Should we reject the promise on 'error' (either a read error or an fs write error)?
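
Both points (reject on 'error', and the 20MB cap from the PR description) can be sketched together. `collectWithLimit` is an illustrative helper that buffers into memory so it is testable with any readable stream; the PR instead streams the response to a `.part` file:

```javascript
const MAX_DOWNLOAD_SIZE = 20 * 1024 * 1024; // 20MB, per the PR description

function collectWithLimit (response, maxSize = MAX_DOWNLOAD_SIZE) {
  return new Promise((resolve, reject) => {
    let size = 0;
    const chunks = [];

    // Read errors reject the promise instead of leaving it hanging.
    response.on('error', reject);
    response.on('data', data => {
      size += data.length;
      if (size > maxSize) {
        response.destroy(); // Stop pulling data from the server
        return reject(new Error(`File exceeds maximum size of ${maxSize} bytes`));
      }
      chunks.push(data);
    });
    response.on('end', () => resolve(Buffer.concat(chunks)));
  });
}
```

Aborting the stream as soon as the cap is hit is what prevents a malicious registry entry from making the client download an arbitrarily large file.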

Contributor

@amaury1093 amaury1093 left a comment

Works like a charm on Kovan! I'm syncing mainnet now


const fsReadFile = util.promisify(fs.readFile);
const fsUnlink = util.promisify(fs.unlink);
const fsReaddir = util.promisify(fs.readdir);
const fsRmdir = util.promisify(fs.rmdir);
Contributor

I saw you included fs-extra in package.json, great idea. In this case you can use fs-extra directly: it includes fs and is promise-based if no callback is passed.


@amaury1093
Contributor

amaury1093 commented Jul 2, 2018

  1. Running npm run ci:build gives the following error.
ERROR in bundle.js from UglifyJs
Invalid assignment [../node_modules/universalify/index.js:7,0][bundle.js:34656,43]

It's fs-extra which requires universalify. See RyanZim/universalify#6. Or a quicker hack might be to exclude universalify in the Uglify plugin

  2. Re

    shell/src/inject.js

    Lines 91 to 93 in 5ad324f

    // For now we simply allow eval(). All builtin dapps are trusted and can use
    // eval(), and all network dapps are served on 127.0.0.1:8545, which have CSP
    // that disallow eval(). So security-wise it should be enough.

If we merge this PR, all dapps will be able to use eval(), which should be avoided. Ideally, in this same file, we'd be able to get the manifest.json of the dapp. @axelchalon do you think that's doable? If yes, then we could disable eval unless there's an allowEval field.

  3. Re

    shell/electron/index.js

    Lines 92 to 104 in 5ad324f

    session.defaultSession
    .setPermissionRequestHandler((webContents, permission, callback) => {
    if (!webContents.getURL().startsWith('file:')) {
    // Denies the permissions request for all non-file://. Currently all
    // network dapps are loaded on http://127.0.0.1:8545, so they won't
    // have any permissions.
    return callback(false);
    }
    // All others loaded on file:// (shell, builtin, local) can have those
    // permissions.
    return callback(true);
    });

We can probably remove this piece of code, as it's less of a security issue. The only permission we're using that I can think of is the signer requiring camera access for Parity Signer. Again, same point as above: if the manifest.json is easily fetchable here, it'd be easier to allow all permissions for builtin dapps and disallow them for network ones.

@5chdn

5chdn commented Jul 2, 2018

How realistic is it we get this through this week?

@axelchalon
Contributor Author

axelchalon commented Jul 3, 2018

@amaurymartiny

re 1. excluding universalify from the uglify plugin doesn't seem to make a difference (or maybe I didn't exclude it correctly). I tried

new webpack.optimize.UglifyJsPlugin({
        sourceMap: true,
        screwIe8: true,
        compress: {
          warnings: false
        },
        output: {
          comments: false
        },
+       exclude: /universalify/
      })

re 2. off the top of my head: we don't really have easy access to the manifest fields here.

re 3. same here. According to https://electronjs.org/docs/tutorial/security#4-handle-session-permission-requests-from-remote-content, "By default, Electron will automatically approve all permission requests unless the developer has manually configured a custom handler", which is very worrying. (Strangely enough, though, I couldn't ever get permissions to be approved in my tests with geolocation and camera, with or without a custom approving handler.) Anyway: I don't think the Signer is a webview, so I suggest we deny all permissions for webviews (dapps); this is what I did in the latest commit. (Once again, I'm not sure this has any effect; all permissions seem to be rejected on my side.)
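
The "deny all permissions for webviews" rule can be factored into a pure predicate so it's testable outside Electron. A sketch only: `shouldGrantPermission` is an illustrative name, and `webContents.getType()` returning 'webview' for webview-hosted content is the Electron behavior this assumes.

```javascript
// Dapps run inside <webview> tags; everything else (shell, Signer UI)
// belongs to the shell itself and keeps its permissions.
function shouldGrantPermission (webContentsType) {
  return webContentsType !== 'webview';
}

// In the Electron main process this would be wired up roughly as:
// session.defaultSession.setPermissionRequestHandler((webContents, permission, callback) => {
//   callback(shouldGrantPermission(webContents.getType()));
// });
```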

@axelchalon
Contributor Author

axelchalon commented Jul 3, 2018

@5chdn I addressed the grumbles; last few commits need reviewing. Branch needs some testing to make sure it works fine (also on windows, and on the mainnet). CI still doesn't pass. Apart from that I think it's good to go.

@amaury1093
Contributor

Concerning 3: yes, disabling all permissions for webviews seems reasonable.

@5chdn

5chdn commented Jul 5, 2018

CI passed :)

@amaury1093 amaury1093 added A8-looksgood Pull request is reviewed well. and removed A0-pleasereview Pull request needs code review. labels Jul 9, 2018
@amaury1093 amaury1093 merged commit c5f3207 into master Jul 9, 2018
@amaury1093 amaury1093 deleted the ac-registry-dapps branch July 9, 2018 13:04
@amaury1093
Contributor

Even works well with --light
