This repository has been archived by the owner on Aug 15, 2019. It is now read-only.

Releases: tensorflow/tfjs-core

TensorFlow.js Core API

30 Mar 14:13

This release brings some exciting news: deeplearn.js is joining the TensorFlow family and is being renamed to TensorFlow.js Core.

It is just one part of a larger ecosystem of tools surrounding TensorFlow in JavaScript, called TensorFlow.js. TensorFlow.js Core contains the low-level linear algebra ops as well as an eager layer for automatic differentiation. We now also provide a high-level layers API on top of Core called TensorFlow.js Layers.

For convenience, we've packaged TensorFlow.js Layers and TensorFlow.js Core under a single union package, which you can find here: https://github.com/tensorflow/tfjs

Since TensorFlow.js is a continuation of deeplearn.js, the first version of the union package and the Core API will be 0.6.0.

What does this mean for existing users of deeplearn.js?

  • To use TensorFlow.js, you will have to update your imports. We recommend using the union package if you don't care about bundle size.
// Import the union package, which also contains the layers API
import * as tf from '@tensorflow/tfjs';
const a = tf.tensor2d([[1, 2]]);

// The layers API:
const model = tf.sequential();
model.add(tf.layers.dense({inputShape: [1], units: 1}));

If you want to just use the core API without any of the layers API, you can import the core library directly:

// Import of tfjs-core only
import * as tfc from '@tensorflow/tfjs-core';
const a = tfc.tensor2d([[1, 2]]);

These are both replacements for the old way to import deeplearn.js:

// Old import
import * as dl from 'deeplearn';
const a = dl.tensor2d([[1, 2]]);

What's new?

  • tf.tidy() no longer supports receiving asynchronous functions: they cause a serious memory-management bug, because GPU memory allocations and deallocations may become interleaved.
  • Tensor.toString() has been added. It returns a numpy-like string rendering of the Tensor values. You can also use Tensor.print(), which is a convenience function for console.log(tensor.toString()).
  • Gradients added for tf.slice(), tf.pad(), tf.reverse(), and tf.tile().
  • Adds tf.dispose(...) which disposes multiple tensors.
  • Added nesterov flag to the momentum optimizer.
  • Allow floating point exponents in tf.pow().
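
As an aside on why tf.tidy() must stay synchronous: tidy tracks the tensors allocated inside a scope and frees them when the scope exits. The sketch below illustrates that bookkeeping idea in plain JavaScript (a conceptual illustration with hypothetical names, not tfjs-core's actual implementation):

```javascript
// Sketch of scope-based memory tracking, the idea behind tf.tidy().
// `live` stands in for the set of GPU-backed tensors currently allocated.
const live = new Set();

function alloc(name) {
  live.add(name);
  return name;
}

function tidy(fn) {
  const before = new Set(live);  // allocations visible on entry
  const result = fn();           // must be synchronous: an async fn would
                                 // return before its allocations happen
  for (const t of live) {
    if (!before.has(t) && t !== result) live.delete(t);  // free temporaries
  }
  return result;
}

const kept = tidy(() => {
  alloc('tmp1');                 // freed when the scope exits
  alloc('tmp2');                 // freed when the scope exits
  return alloc('result');        // returned, so it survives
});
```

After the call, only 'result' remains live; with an asynchronous function, the allocations of two overlapping scopes could interleave and this bookkeeping would free the wrong tensors.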

What's changed?

  • tf.conv2d signature has changed. This should only affect you if you are using dimRoundingMode.
// Before
tf.conv2d(x, filter, strides, pad, dimRoundingMode?)
// After
tf.conv2d(x, filter, strides, pad, dataFormat?, dilations?, dimRoundingMode?)
  • Scalar / Tensor1D / Tensor2D / Tensor3D / Tensor4D are now just types in TypeScript. They do not exist in JavaScript.
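
A note on the dimRoundingMode parameter from the conv2d change above: it controls how a fractional output size is rounded to an integer. A plain-JavaScript sketch of that computation (illustrative only, not tfjs-core's internal code):

```javascript
// Output size along one dimension of a convolution, given input size,
// filter size, stride, and symmetric padding. roundingMode decides
// how a fractional result is rounded to an integer.
function convOutputSize(inSize, filterSize, stride, pad, roundingMode) {
  const raw = (inSize - filterSize + 2 * pad) / stride + 1;
  switch (roundingMode) {
    case 'floor': return Math.floor(raw);
    case 'round': return Math.round(raw);
    case 'ceil':  return Math.ceil(raw);
    default:      return raw;  // no rounding requested
  }
}

convOutputSize(10, 3, 2, 0, 'floor');  // (10 - 3) / 2 + 1 = 4.5 -> 4
convOutputSize(10, 3, 2, 0, 'ceil');   // -> 5
```
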

What's removed?

We've deleted all of the code we deprecated in the 0.5.0 release.

  • CheckpointLoader and the dumper have been deleted. We now have automatic tools for model porting. For those who still depend on this class, we've moved the loader to a separate repo (all of the instructions are there): https://github.com/PAIR-code/deeplearnjs-legacy-loader.
  • NDArray is deleted from the codebase.
  • Graph mode is deleted. All of the functionality of Graph mode is available in eager.
  • NDArrayMath is deleted. This is replaced with top-level functions on tf. If you want a different backend, use tf.setBackend('webgl'|'cpu');

For more information about TensorFlow.js, check out the TensorFlow.js website.

Acknowledgements

Thanks again to all our amazing contributors for making this happen. Thanks @manrajgrover for your continued work on the project: fixing documentation bugs, updating optimizers to use chaining for variables, adding log1p, and adding nesterov to the momentum optimizer. Thanks @oveddan for your incredible contributions to conv2d, adding dilation so we can support atrous convolutions. Thanks @meganetaaan for a fix to sample code, @OliverLonghi for improving ml_beginners.md, @jbencook for updating the squeezenet demo to do image resizing, @rilut for exporting TensorBuffer, @lukebelliveau and @HanKruiger for fixing typos in the docs, and @nbardy for updating the performance.

0.5.0

17 Feb 03:07

This release brings some big changes to deeplearn.js. The largest is the move from a graph-centric API to an 'eager mode' imperative one that mirrors TensorFlow Eager.

We believe the eager-style programming model is better suited to fast development and rapid prototyping. We have also found eager-style code easier to debug in the ways we, as web developers, typically debug things. Another big change is moving more functionality to the top level of the library: all ops are now exported directly at the top level.

These and more changes are detailed below. We'd love to hear your feedback, so feel free to file issues on the GitHub repo if you run into any problems or have other feedback.

What’s New

  • Eager API: This release introduces our new eager API and all optimizers support eager mode. This allows you to write imperative code to do inference and training without a deferred graph execution model. Eager mode also makes debugging easier.

     import * as dl from 'deeplearn';

     // Trainable coefficients and example data (illustrative; only the
     // training loop below appeared in the original snippet).
     const a = dl.variable(dl.scalar(Math.random()));
     const b = dl.variable(dl.scalar(Math.random()));
     const c = dl.variable(dl.scalar(Math.random()));
     const xs = dl.tensor1d([0, 1, 2, 3]);
     const ys = dl.tensor1d([1.1, 5.9, 16.8, 33.9]);

     // y = a * x^2 + b * x + c.
     const f = x => a.mul(x.square()).add(b.mul(x)).add(c);
     const loss = (pred, label) => pred.sub(label).square().mean();

     const learningRate = 0.01;
     const optimizer = dl.train.sgd(learningRate);

     for (let i = 0; i < 10; i++) {
       optimizer.minimize(() => loss(f(xs), ys));
     }
  • Chain API: Tensors now provide a chaining API for ops, which means you can write

    import * as dl from 'deeplearn';
    
    dl.tensor([[2, 3], [5, 1]])
        .mul(dl.scalar(3))
        .square();
  • New Ops: dl.gather, dl.less, dl.logicalAnd, dl.where, dl.range, dl.linspace, dl.logicalXor, dl.logicalNot, dl.stack, dl.squeeze and dl.expandDims.

  • New utility functions: dl.memory, dl.time and dl.nextFrame.

  • Backend selection: Use dl.setBackend('webgl'|'cpu'). We also automatically choose the best backend for you, so you will not usually need to call this.

  • New API Docs!: We are super excited about these; we are adding code snippets to our API docs, and you can run them inline! Power to the browser!
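
The eager training loop above works by repeatedly computing a loss and stepping variables against its gradient. Here is the same idea as a self-contained plain-JavaScript sketch, with a numeric gradient standing in for the library's automatic differentiation (illustrative only):

```javascript
// Minimize loss(w) = (w - 3)^2 with plain SGD, mirroring the shape of
// optimizer.minimize(() => loss(...)) in the eager example above.
const loss = w => (w - 3) ** 2;

// Central-difference numeric gradient (the library computes gradients
// automatically via backpropagation instead).
function numericGrad(f, w, eps = 1e-4) {
  return (f(w + eps) - f(w - eps)) / (2 * eps);
}

let w = 0;
const learningRate = 0.1;
for (let i = 0; i < 100; i++) {
  w -= learningRate * numericGrad(loss, w);  // SGD update step
}
// w converges toward 3, the minimizer of the loss
```
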

What’s Changing

  • NDArray has been renamed to Tensor: Tensors are now immutable, meaning operations on tensors return new tensors rather than modifying the originals. The NDArray identifier will still work in 0.5 but will be removed in future releases.

  • Tensor Creation: We now provide helper functions to create tensors. Use dl.tensor instead of NDArray.new. We also have dl.scalar, dl.tensor1d, dl.tensor2d, dl.tensor3d and dl.tensor4d for your convenience.

  • Top Level Ops: All ops are now on the top-level object of the library. That means you can now do the following:

    import * as dl from 'deeplearn';
    
    dl.randomNormal([2,2]).print();
    
  • dl.clone() now does a shallow copy

  • For TypeScript users: Tensors no longer have a dtype generic; dtype is now only a property of a tensor.

What’s deprecated

This version represents a big shift to a new style API, thus a number of things we are deprecating will be removed in 0.6. Please start upgrading your code now, or lock down your dependencies if you aren't yet ready to upgrade.

  • Graph Mode: With the introduction of eager mode we are deprecating graph mode. The entire graph API will be removed in 0.6.

  • NDArrayMath: As mentioned above, all ops are now at the top level, so no longer use code like dl.ENV.math.square(). **If you want to use a different backend, use dl.setBackend(). We will be removing NDArrayMath in 0.6.**

Acknowledgements

Thanks to all our contributors that helped make this happen! Thanks @manrajgrover for your work on optimizers, gradients, improving tests and more, @reiinakano for work on dl.gather, @jaassoon for work on improving our unit tests, @skwbc for improving error messages in our checkpoint dump script, @gena for work on gradients for dl.sigmoid, @davidsoergel for work on an upcoming dl.contrib.data package, @caisq for work on gradients for dl.clipByValue and dl.batchNormalization, and @weizhihuang for fixing a bug in one of our demos.

We also want to thank the contributors to 0.4.2, for which we didn't do official release notes. Thanks @chaosmail for your work on automatic Caffe model porting, improvements to conv2d, improvements to our testing infrastructure, and dl.reverse, @Lewuathe for concat backprop, documentation, and better error messaging, @nkreeger for your work on random number generation, array ops, Windows tooling support, and the game-of-life demo, @minsukkahng for fixing graph-mode broadcasting backprop for arithmetic, @shancarter for the awesome fonts demo and for giving us the idea to implement operation chaining, @jameswex for the fonts demo, @experiencor for the MobileNet and YOLO models, @easadler for conv1d, @LukasDrgon for adding jsDelivr CDN links, @pyu10055 for work on dl.pow and dl.squeeze, @wang2bo2 for work on dl.prelu, @vuoristo for work on softmax gradients, @haasdo95 for fixing a bug in our docs, and @jimbojw and @iaroslav-ai for helping with docs.

Finally, welcome to the team @tafsiri, @nkreeger and @pyu10055!

0.3.0

09 Oct 20:41

API additions

  • 🎉 Added support for iOS 🎉 #109

  • Added optimizers (Adam, AdaGrad, AdaDelta, RMSProp). #170 #102 #183
    Thanks @Lewuathe and @mnottheone for the implementations and @SayHelloToWorld for the bug fixes!

  • Added math.cos, math.tan, math.asin, math.acos, math.atan, math.sinh, math.cosh, math.abs #123

  • Added math.oneHot and math.multinomial #160

  • Added NDArray.getValuesAsync() which asynchronously downloads the values from the GPU #146

  • Added math.concat[1-4]D and math.slice[1-4]D #151

Bug fixes

  • Fixed bug in NDArray.randTruncatedNormal. #172 Thanks @caisq for the find!

Other improvements

  • Added highlighting when GLSL fragment shaders fail to compile

  • Faster reduction ops (min/max/reduce_sum/logsumexp) #145

  • Faster matmul and convolutions/pooling #129

  • Added script for model builder training data preprocessing. #136 Thanks @kaihuchen!

  • Improved benchmark measurements. #140

Thanks to all our contributors!

@Lewuathe @mnottheone @kaihuchen @SayHelloToWorld @cghawthorne @digitalheir @caisq @arthurjdam @jsphweid @bernard-lin @shancarter

v0.2.0

07 Sep 20:18

API additions:

  • Add broadcasting to NDArrayMath functions, e.g. math.add(array2d, scalar) works. Use math.addStrict(a, b) to statically enforce that a and b have the same rank.
  • Add NDArrayMath.basicLSTMCell and NDArrayMath.multiRNNCell to the math layer for inference only
  • Add a TensorFlow-like API for NDArrayMath.conv2d padding, with 'SAME'|'VALID' paddings
  • Add NDArrayMath.sqrt
  • Add sgd momentumOptimizer
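
The broadcasting addition above follows NumPy-style rules: shapes are aligned from their trailing axes, and each pair of dimensions must be equal or contain a 1 (a missing dimension acts like 1). A plain-JavaScript sketch of the shape rule (illustrative, not the library's code):

```javascript
// Compute the broadcast result shape of two shapes, NumPy-style.
function broadcastShape(a, b) {
  const out = [];
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const da = a[a.length - 1 - i] ?? 1;  // missing dims act like 1
    const db = b[b.length - 1 - i] ?? 1;
    if (da !== db && da !== 1 && db !== 1) {
      throw new Error(`Shapes [${a}] and [${b}] do not broadcast`);
    }
    out.unshift(Math.max(da, db));  // the 1 stretches to match
  }
  return out;
}

broadcastShape([2, 3], []);   // matrix + scalar -> [2, 3]
broadcastShape([2, 3], [3]);  // matrix + row    -> [2, 3]
```
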

Other features:

  • math.enableDebugMode() now has console profiling for math functions

Internal changes:

  • Add logical sampling, with a shader compiler that splices in methods for sampling in logical space, instead of sampling on 2D physical texture space. This lets us do broadcasting, and opens up the door for batching. It is also a prerequisite for integer textures to get iOS working (this will be part of the next release).
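
To make "sampling in logical space" concrete: a tensor element is addressed by a flat logical index, which must then be mapped back to coordinates on the 2D physical texture that stores the data. A plain-JavaScript sketch of that mapping (conceptual; the real shaders do this in GLSL):

```javascript
// Map a logical (row, col) position in a [rows, cols] tensor to (x, y)
// texel coordinates on a physical texture of width texWidth.
function logicalToTexCoords(row, col, cols, texWidth) {
  const flatIndex = row * cols + col;      // flatten the logical position
  return {
    x: flatIndex % texWidth,               // texel column
    y: Math.floor(flatIndex / texWidth),   // texel row
  };
}

// Element (1, 2) of a [3, 4] tensor on a 5-texel-wide texture:
logicalToTexCoords(1, 2, 4, 5);  // flat index 6 -> {x: 1, y: 1}
```

Because the mapping is computed per element rather than assuming the texture layout matches the tensor layout, the two shapes can differ, which is what enables broadcasting and opens the door to batching.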

v0.1.0 First release

04 Aug 17:45

We are so excited to finally open source this library. We can't wait to see what you build with it!