Different performance for computed value written in different ways. #375

Closed

kirillsalikhov opened this issue Jun 29, 2016 · 10 comments

@kirillsalikhov

I'm trying to make a simple app that filters a JSON file with ~3000 records. Performance is slow (~2-3 s) when I use a filter function written with _.reduce and closures inside @computed, but everything works fast (~40 ms) when I write it in a simpler form.

The JSON file looks like:

[
  {
    "ID": "3450",
    "Corpus": "1",
    "Section": "1",
    "Floor": 25,
    "FlatNumber": 282
  }
]

My model class:

import {observable, computed} from 'mobx';
import _ from 'lodash';

var startsWith = (filter, name) => (flat) => {
    if (!filter[name]) return true;

    return _.startsWith(flat[name] + '', filter[name]);
};

class FlatsModel {

    flats = [];

    @observable filter = {
        Corpus: '',
        Floor: '',
        Section: ''
    };

    filterFns = {
        'Corpus': startsWith(this.filter,'Corpus'),
        'Floor': startsWith(this.filter, 'Floor'),
        'Section': startsWith(this.filter, 'Section')
    };

    filterFn = (flat) => {
        return _.reduce(this.filterFns, function(result, fn) {
            return result && fn(flat);
        }, true);
    };

    @computed get filtered() {
        // SLOW
        return this.flats.filter(this.filterFn);
    }

    populate(flats) {
        this.flats = flats;
    }

    setFilter(name, value) {
        this.filter[name] = value;
    }
}

export default FlatsModel;

The view is simple:

import React from 'react';
import {observer} from 'mobx-react';

import Row from './row';

@observer
class List extends React.Component {    
    render() {
        var {model} = this.props;
        return (
            <table>
                <tbody>
                    {model.filtered.slice(0,5).map(flat => <Row flat={flat} key={flat.ID}/>)}
                </tbody>
            </table>
        )
    }
}

export default List;

If I rewrite get filtered() like below, everything works fine:

@computed get filtered() {
    var filter = this.filter;        
    return this.flats.filter(function(flat) {
        var val = true;
        if (filter['Corpus']) {
            val = val && _.startsWith(flat['Corpus'] + '', filter['Corpus'])
        }        
        if (filter['Floor']) {
            val = val && _.startsWith(flat['Floor'] + '', filter['Floor'])
        }        
        if (filter['Section']) {
            val = val && _.startsWith(flat['Section'] + '', filter['Section'])
        }
        return val;
    });
}

I'm not sure whether I'm using @computed the right way.
Are there any ways to use complex functions inside @computed?

@mweststrate
Member

This might very well relate to #360.

Another reason your latter solution might be faster is that you have a local copy of filter in the closure. But note that your implementations are actually quite different; I think it is good to compare both solutions un-MobX-ed to each other, so we have a baseline and can see whether MobX adds relatively more overhead in the first case (given your numbers I suspect it does, but just to make sure).
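
Something like the following could serve as such an un-MobX-ed baseline. This is only a sketch (the filter value, the inlined data, and the timing labels are made up, not taken from your code); it just times the two filtering styles against plain arrays and objects, with no MobX involved:

import _ from 'lodash';

// Stand-in for the ~3000 records parsed from the JSON file.
const flats = [
    { ID: '3450', Corpus: '1', Section: '1', Floor: 25, FlatNumber: 282 }
    // ... ~3000 more records
];

const filter = { Corpus: '1', Floor: '', Section: '' };

const startsWith = (filter, name) => (flat) => {
    if (!filter[name]) return true;
    return _.startsWith(flat[name] + '', filter[name]);
};

const filterFns = {
    Corpus: startsWith(filter, 'Corpus'),
    Floor: startsWith(filter, 'Floor'),
    Section: startsWith(filter, 'Section')
};

// Variant 1: _.reduce over closures (mirrors filterFn from the issue).
console.time('plain reduce + closures');
flats.filter(flat => _.reduce(filterFns, (result, fn) => result && fn(flat), true));
console.timeEnd('plain reduce + closures');

// Variant 2: inline ifs (mirrors the rewritten filtered getter).
console.time('plain inline ifs');
flats.filter(flat => {
    let val = true;
    if (filter.Corpus) val = val && _.startsWith(flat.Corpus + '', filter.Corpus);
    if (filter.Floor) val = val && _.startsWith(flat.Floor + '', filter.Floor);
    if (filter.Section) val = val && _.startsWith(flat.Section + '', filter.Section);
    return val;
});
console.timeEnd('plain inline ifs');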

@kirillsalikhov
Author

I made a sandbox https://github.com/kirillsalikhov/flat-finder with un-MobX-ed versions (if I understood you correctly). Results on my computer are:

plain-fn1: 42.580ms (_.reduce and closures)
plain-fn2: 21.845ms
mobx-fn1: 2791.332ms (_.reduce and closures)
mobx-fn2: 26.898ms

@mweststrate
Member

Thanks, that is really useful! I'll profile and fix this performance drop.


@nyura123


I think this might be related to #250: accessing the same observable value (in this case, this.filter or filter['Corpus'/'Floor'/'Section']) multiple times within the same derivation.
There's an optimization that prevents some of the duplication (in reportObserved), but it only kicks in if you access observables in a certain order.

The following change seems to make the non-closure MobX filtering behave the same as the closure one, because it accesses the observables in the same way: this.filter is accessed only once, and each filter[name] is only accessed if the flat isn't already filtered out.

filterFn2 = (filter, flat) => {
    var val = true;
    if (filter['Corpus']) {
        val = val && _.startsWith(flat['Corpus'] + '', filter['Corpus'])
    }

    if (val && filter['Floor']) {
        val = val && _.startsWith(flat['Floor'] + '', filter['Floor'])
    }

    if (val && filter['Section']) {
        val = val && _.startsWith(flat['Section'] + '', filter['Section'])
    }
    return val;
};

@computed get filtered() {
    var filterFn = this[this.filterName];
    const filter = this.filter;
    return this.flats.filter((flat) => filterFn(filter, flat));
}
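
For comparison, the original _.reduce/closure style from FlatsModel could also be kept while reading each observable filter field only once per recomputation, by copying the filter into a plain object first. This is just a hypothetical sketch (plainFilter is a made-up name, and it reuses the module-level startsWith helper from the issue); I haven't run it against the sandbox:

@computed get filtered() {
    // Read each observable filter field exactly once per recomputation,
    // copying the values into a plain, non-observable object.
    const plainFilter = {
        Corpus: this.filter.Corpus,
        Floor: this.filter.Floor,
        Section: this.filter.Section
    };
    // Build the closures over the plain copy, so the per-flat _.reduce loop
    // no longer touches MobX observables at all.
    const fns = ['Corpus', 'Floor', 'Section'].map(name => startsWith(plainFilter, name));
    return this.flats.filter(flat => _.reduce(fns, (result, fn) => result && fn(flat), true));
}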

@mweststrate
Member

Correct. I experimented a bit with it yesterday, and simply using a hash set seems to completely remove this bottleneck. It is a little bit slower for simple computations, but it removes the performance drop for complex ones like this one completely. Unexpectedly I will be speaking this Sunday at RuhrJS, but after that I'm definitely going to fix this.
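
Roughly, the idea is to track the dependencies of the currently running derivation in a Set, so that reporting the same observable thousands of times within one computation is a cheap deduplicated add instead of repeatedly growing (and later diffing) an array. A conceptual sketch only; the names below are made up and do not match MobX's actual internals:

// Conceptual sketch; not MobX source code.
let currentDependencies = null;

function reportObserved(observable) {
    // Called on every observable read; Set#add dedupes repeated reads in O(1).
    if (currentDependencies) currentDependencies.add(observable);
}

function trackDerivation(fn) {
    const previous = currentDependencies;
    currentDependencies = new Set();
    try {
        const result = fn();
        // Each observable appears at most once, however often it was read.
        return { result, dependencies: [...currentDependencies] };
    } finally {
        currentDependencies = previous;
    }
}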


@hellectronic
Contributor

Did not know about this conference :S. I live next to where it takes place.

@mweststrate
Member

There are 3 tickets left :-) And some tickets are being sold on Twitter.
I'll bring MobX swag ;-)


mweststrate added a commit that referenced this issue Jul 15, 2016
@mweststrate
Member

Started optimizing for these kinds of situations; initial results are very promising :) On the current MobX performance test suite:

Before:
52.72user 0.94system 0:53.07elapsed 101%CPU (0avgtext+0avgdata 925828maxresident)k

After:
9.97user 0.19system 0:09.74elapsed 104%CPU (0avgtext+0avgdata 249520maxresident)k

@mweststrate
Member

@kirillsalikhov the following build should address this issue and bring the performance back into proportion (with constant instead of exponential overhead). Would you mind verifying that? mobx@2.4.0-alpha1

Thanks!

@kirillsalikhov
Author

Every sample now runs in almost the same time.
Thank you for the great library!

mweststrate added a commit that referenced this issue Jul 20, 2016
* Store timing results when running perf tests

for easy comparison

* First naive set implementation

* use real sets -> 10 secs

* some notes

* Committed perf results with native sets

* simple universal set implementation

* all tests succeed again

* Fixed logging of sort tests

* faster bind dependencies

* added LRU cache

* Better test setup. Perf overhead seems linear now

* smarter try finally

* optimized use last-accessed-cache

* some code simplification

* smarter diff

* even smarter loops :)

* use array for observing, 5% faster, 25% less mem

* made sort test trickier

* Reserve observing buffer

* faster array allocations, ES5 map impl

* several fixes and improvements

* chores

* Fixed some autorun clean up stuff

* renames

* Fixed a bug in the dependency diffing algorithm