
Suggestion: int type #195

Closed
fdecampredon opened this issue Jul 22, 2014 · 38 comments
Labels
Out of Scope This idea sits outside of the TypeScript language design constraints Suggestion An idea for TypeScript

Comments

@fdecampredon

Introducing an int type could allow some errors to be caught at compile time (like trying to index an array with a float) and perhaps improve the performance of the emitted JavaScript. To obtain a true int, TypeScript could systematically emit a cast with |0 when a variable/parameter is an integer:

var i: int; 
var n: number;
var a: any = 1.2;
i = 1.1; // error;
i = n; // error
i = 3 / 4; // valid, type casted
i = a; // valid, type casted

var indexable: { [i: int]: bool } = {};
indexable[n] = true; // error
indexable[i] = true; // valid
indexable[i/2] = true; // valid, type casted
indexable[a] = true; // valid, type casted

function logInt(i: int) {
  console.log(i);
}

would emit

var n;
var i = i | 0;
var a = 1.2;
i = 1.1 // error;
i = n // error
i = (3 / 4 ) | 0; // valid, type casted
i = a | 0; // valid, type casted

var indexable = {};
indexable[n] = true; // error
indexable[i] = true; // valid
indexable[(i / 2) | 0] = true; // valid, type casted
indexable[a | 0] = true; // valid, type casted

function logInt(i) {
  i = i | 0;
  console.log(i);
}

There will perhaps be a problem with generics and methods, but I guess the compiler could in this case insert the cast when passing the parameter:

function add<T>(a: T,b: T): T {
  return a + b;
}  

var a = add(1, 2); // a is number value 3
var b = add<int>(1/2, 2); // b is int, value 2
var c = add(1/2, 2); // c is number, value 2.5

emit :

function add(a, b) {
  return a + b;
}
var a = add(1, 2); // a is number value 3
var b = add((1/2) | 0, 2); // b is int, value 2
var c = add(1/2, 2); // c is number, value 2.5

Also, perhaps the compiler should always infer number if there is no explicit type annotation:

var n = 3 //number
var i: int  = 3 //int
@RyanCavanaugh
Member

Great start; thanks for the examples. I have some follow-up questions.

Compat of literals vs divide expressions

i = 1.1; // error;
i = 3 / 4; // valid, type casted

By what rule is the first line an error, but the second line OK? I have to assume that the type of 1.1 is still number, and that the type of 3/4 is still number, so from a type system perspective there's no difference there.

var indexable: { [i: int]: bool } = {};
indexable[n] = 3; // error
indexable[i] = 3; // valid

I don't understand either of these. First, 3 is not assignable to bool, so the second assignment should definitely fail. Second, indexable[n] being an error implies that indexable[0] would also be an error (0 and n are of the same type), which is very problematic.

Treatment of optional int parameters

The proposed emit of i = i | 0; implies that undefined and null values are implicitly converted to zero. This makes optional int parameters very dangerous, because you couldn't check them for undefined. How would you deal with that?
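
For reference, a minimal sketch of the hazard in today's TypeScript (number stands in for the proposed int, which does not exist):

// hypothetical emit for `function pad(s: string, width?: int)`
function pad(s: string, width?: number) {
  width = (width as number) | 0; // the proposed coercion: undefined | 0 === 0
  // From here on, "caller omitted width" and "caller passed 0" are
  // indistinguishable, so a default value other than 0 is impossible.
  return s.padStart(width);
}

console.log(pad("x"));    // "x" - the omitted argument silently became 0
console.log(pad("x", 0)); // "x" - indistinguishable from the call above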

Emit locations for | 0

Can you clarify the rules for where exactly | 0 would be emitted?

@fdecampredon
Author

Compat of literals vs divide expressions

i = 1.1; // error;
i = 3 / 4; // valid, type casted
By what rule is the first line an error, but the second line OK? I have to assume that the type of 1.1 is still number, and that the type of 3/4 is still number, so from a type system perspective there's no difference there.

OK, I guess my examples were not really clear. What I meant is:
assigning a number value to a variable explicitly typed as int should not be allowed:

var i: int;
var n: number;
i = n; //error;
i = 1.1 //error

However, the division of two int-compatible values is contextually typed to int in places where the compiler expects an int:

var i: int;
var i1: int;
i = i / i1 // valid and type casted because `i` and `i1` are `int-compatible`
i = 3 / 4 // valid and type casted because `3` and `4` are `int-compatible`

var indexable: { [i: int]: bool } = {};
indexable[n] = 3; // error
indexable[i] = 3; // valid
I don't understand either of these. First, 3 is not assignable to bool, so the second assignment should definitely fail.

I made an error in this example; a correct example would be:

indexable[n] = true; // error
indexable[i] = true // valid

I just wanted to show that the same rules apply for variable assignment and indexing (and they should also apply to parameters). I'll update the example right now.

Second, indexable[n] being an error implies that indexable[0] would also be an error (0 and n are of the same type), which is very problematic.

Not really. In the same way that var i: int = 0 is valid because 0 is contextually typed to int, indexable[0] is valid because 0 is contextually typed to int.
n, however, is explicitly typed as number, so indexable[n] is an error because number is not compatible with int.

Emit locations for | 0

Can you clarify the rules for where exactly | 0 would be emitted?

There are four cases where the compiler must emit |0 (sketched after the list):

  • for an uninitialized variable: var i: int emits var i = i | 0;
  • when assigning a division of int-compatible values
  • when assigning a value typed as any
  • when a parameter is typed as int: it is cast at the beginning of the function body
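
A sketch of those four sites using the proposed int type (hypothetical; the emitted JavaScript is shown in comments):

// 1. uninitialized variable
var i: int;             // emits: var i = i | 0;  (so i starts at 0)

// 2. assigning a division of int-compatible values
i = 3 / 4;              // emits: i = (3 / 4) | 0;

// 3. assigning a value typed as any
var a: any = 1.9;
i = a;                  // emits: i = a | 0;

// 4. an int parameter is coerced at the top of the function body
function half(x: int) { // emits: function half(x) { x = x | 0; return x; }
  return x;
}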

Treatment of optional int parameters

The proposed emit of i = i | 0; implies that undefined and null values are implicitly converted to zero. This makes optional int parameters very dangerous, because you couldn't check them for undefined. How would you deal with that?

In fact, I would tend to think that having a rule that ints are always initialized, and 0 by default,
as in other typed languages like ActionScript, would make sense to me.
If you look at one of the examples, you can see that var i: int emits var i = i | 0;, so even for variables an int is never undefined nor null.
However, that part is completely optional, and the compiler could just cast parameters passed to the function in the same way it would for assignment and indexing:

function logInt(i: int) {
  console.log(i);
}
var t: any = 3;
logInt(t);
emits:

function logInt(i) {
  console.log(i);
}
var t = 3;
logInt(t|0); // t is any so type casted

Still, I think that emitting the cast at the beginning of the function could allow JIT compilers to perform some optimizations. But that's perhaps another topic.
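
For context, this mirrors the asm.js convention, where a coercion at function entry declares the parameter's type to the engine (a plain-JavaScript sketch, not an actual asm.js module):

function sumInts(a, b) {
  a = a | 0;          // declares a as int to the JIT
  b = b | 0;          // declares b as int to the JIT
  return (a + b) | 0; // keeps the result in integer range
}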

@RyanCavanaugh
Member

Diving into the contextual typing, it sounds like you have a few new rules:

  • An indexing expression whose operand has an int indexer is contextually typed with the type int
  • The type of a numeric literal is number unless its contextual type is int and it is an IntegerLiteral, in which case its type is int

Is that right? Maybe elaborate some examples here.

Going back to this example:

var i: int;
i = 3 / 4 // valid and type casted because `3` and `4` are `int-compatible`

What's the emit here? i = 3 / 4 or i = (3 / 4) | 0? Both are bad -- one introduces a floating point number into an int, and the other breaks JavaScript compatibility (this is not something we would do).

@fdecampredon
Author

var i: int;
i = 3 / 4 // valid and type casted because 3 and 4 are int-compatible
What's the emit here? i = 3 / 4 or i = (3 / 4) | 0? Both are bad -- one introduces a floating point number into an int, and the other breaks JavaScript compatibility (this is not something we would do).

To me it seems logical that when you assign something to an int it gets automatically cast, so I would say that:

var i: int;
i = 3 / 4 // valid and type casted because `3` and `4` are `int-compatible`

would be translated to :

var i;
i = (3 / 4) | 0;

However, the division part of what I proposed is just a tool to avoid the boilerplate of casting manually; if that implicit cast is not desired, the compiler could just report an error.

@fdecampredon
Author

OK, trying to sum up the discussion, I obtain the following rules:

  • there is a new primitive type int representing integer values
  • int is not assignable from number; it is always initialized to 0 and cannot be undefined. To obtain this behavior, the compiler will cast int values with |0 in places where they might become non-int values:
var i: int; 
function addInt(a: int, b: int) {
  return a + b;
}
var a: any = 3.5;
i = a;

emit:

var i = i | 0;
function addInt(a, b) {
  a = a | 0;
  b = b | 0;
  return a + b;
}
var a = 3.5;
i = a | 0;
  • The type of a numeric literal is number unless its contextual type is int and it is an IntegerLiteral, in which case its type is int :
var n = 1; // n is number
var i: int = 1; // i is int
i = n; // error

var objWithNumber = { i: 1 }; // objWithNumber type is { i: number; };
var objWithInt: { i: int } = { i: 1 }; // objWithInt type is { i: int };
objWithInt = objWithNumber; // error

function getValue() { return 1; } // getValue type is: () => number
function getInt(): int { return 1; } // getInt type is: () => int
var n1 = getValue(); // n1 is number
var i1 = getInt(); // i1 is int
i1 = n1; //error
  • An indexing expression whose operand has an int indexer is contextually typed with the type int
var n: number;
var i : int;
var indexable: { [index: int]: any };

indexable[n]; // error
indexable[i]; // valid
indexable[1]; // valid: `1` is here contextually typed to `int`
indexable[1.1]; // error
  • operators (see the sketch after this list):
    • int (+, -, *) int is int
    • int / int is number, if we do not include the automatic division cast
    • number (|, &, >>, <<, >>>) number is int; this matters because |0 will serve as the manual cast for division
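
A sketch of how expressions would type under these operator rules (hypothetical, since int does not exist in TypeScript):

var a: int = 6;
var b: int = 4;
var p = a * b;            // int: (+, -, *) on two ints stays int
var q = a / b;            // number: division escapes int
var r: int = (a / b) | 0; // |0 is the manual cast back to int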

@ivogabe
Contributor

ivogabe commented Aug 4, 2014

I have an alternative proposal, which also introduces a double type. It is more in line with asm.js; for example, |0 is not allowed on a number (only on an integer), so you should use ~~ to convert a number to an integer.

Integer & double proposal

This proposal introduces two new primitive types: integer and double. Both extend the number type. It was designed for two reasons: compile-time type checking and run-time optimizations (as in asm.js).

Naming

I chose integer instead of int because bool was renamed to boolean in TypeScript, and integer is more in line with boolean than int is.

ASM.js

This proposal was designed with asm.js in mind. integer maps to asm's signed and double maps to double.

Values

A double can be everything a number can be except undefined or null, including Infinity, -Infinity and NaN. An integer can be every number without a decimal point. An integer cannot be undefined, null, Infinity, -Infinity nor NaN. number is the only primitive number type that can contain undefined and null.

When you declare a variable with the type double or integer, it will automatically be 0.

Any number literal that contains no decimal point and does not have a negative exponent after the E (like 9E-4) is an integer. All other number literals are doubles. The following code will throw errors:

var int: integer; // Ok, int will be 0
int = 3.0; // Error: A double cannot be implicitly converted to an integer.
int = 9E-4; // Error: A double cannot be implicitly converted to an integer.
int = undefined; // Error: An integer cannot be undefined.
int = 3; // Ok

var d: double; // Ok, d will be 0.
d = 3; // Error: An integer cannot be implicitly converted to a double.
d = 9E4; // Error: An integer cannot be implicitly converted to a double.
d = 3.; // Ok
d = <double> 3; // Ok

Casts

You can cast between the types number, integer and double. The value will be converted at run time (see Generated JavaScript). When converting to an integer, the number will be truncated: the digits after the decimal point will be removed, so -1.5 will be converted to -1. A value that cannot be converted by truncating (e.g. undefined or NaN to an integer) will become 0.

var n: number = 3.5;
var d: double = <double> n; // You need to cast here, otherwise you will get an error 'A number cannot be implicitly converted to a double'.
var int: integer = <integer> d; // int will be 3.

n = undefined;
d = <double> undefined; // d will be 0;
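
For reference, plain JavaScript already truncates this way, which is what the proposed emit relies on:

console.log(-1.5 | 0);               // -1 (truncates toward zero, unlike Math.floor)
console.log(~~-1.5);                 // -1 (double bitwise NOT, same truncation)
console.log(NaN | 0);                //  0 (unconvertible values become 0)
console.log((undefined as any) | 0); //  0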

Optional arguments

An optional argument typed as integer or double is allowed. When no default value is given, 0 is used as the default value (because an integer or a double cannot be undefined).

Generated JavaScript

Most expressions will be wrapped with (...)|0 (for integers) and +(...) (for doubles).

Function arguments

Arguments will be reassigned according to the ASM.js spec.

TypeScript:

function foo(a: integer, b: double, c: integer = 3, d: double = 3., e?: integer, f?: double) {
}

JavaScript:

function foo(a, b, c, d, e, f) {
    a = a|0;
    b = +b;
    c = (c === void 0 ? 3 : c)|0;
    d = +(d === void 0 ? 3. : d);
    e = e|0; // undefined | 0 => 0
    f = +f; // +undefined => 0
}

Adding 0 explicitly as the default value should generate the same code as adding no default value.

Function return

The expression of the return statement should be wrapped:

JavaScript:

function foo() {
    return ("complex calculation".length)|0; // Returns integer
}
function bar() {
    return +("complex calculation".length / 3); // Returns double
}

Assigning to a variable

When declaring a variable (integer or double) without assigning a value, it gets the value 0 (since an integer or double cannot be undefined).
When assigning something to a variable whose type is integer or double, the expression should be wrapped:

TypeScript:

var a: integer = "complex calculation".length;
a = a * 2;
a *= a;

var b: integer; // b will be 0.

JavaScript:

var a = ("complex calculation".length)|0;
a = (a * 2)|0;
a = (a * a)|0;

var b = (0)|0; // b will be 0.

Typecasting

A cast to double is wrapped with +(...) and a cast to integer is wrapped with ~~(...), because the asm.js spec does not allow a double on the | operator.

TypeScript

var n: number = 4.5;
var d: double = <double> n;
var int: integer = <integer> n;

JavaScript

var n = 4.5;
var d = +(n);
var int = ~~(n);

A cast from anything to number, from integer to integer, or from double to double will not emit extra JavaScript.

Operators

Unary

  • +number => double
  • -number => double (since -undefined = NaN and -null = 0)
  • -integer => integer
  • ~integer => integer
  • !number => boolean

Binary

  • integer +, - or * integer => integer
  • number +, - or * number => double (since undefined + undefined = NaN)
  • number / number => double (since undefined / undefined = NaN)
  • integer |, &, ^, <<, >> or >>> integer => integer
  • number <, <=, >, >=, ==, !=, === or !== number => boolean
    Note that you cannot apply the /= assignment operator on an integer.

Generated errors

An error is thrown when (see the sketch after this list):

  • an integer or double is expected but something else is given (e.g. number or undefined).
  • you apply an illegal assignment operator (e.g. /= on an integer or |= on a number).
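
A sketch of those error cases under the proposed rules (hypothetical, since integer and double do not exist):

var i: integer = 1;
var n: number = 1;

i = n;    // error: an integer is expected but a number is given
i /= 2;   // error: /= is illegal on an integer (the result may not be integral)
n |= 1;   // error: |= is illegal on a number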

@danquirk
Member

danquirk commented Aug 4, 2014

How does this existing TypeScript code behave? Is it now an error when integer is inferred for x?

var x = 1;
x = 1.5;

It's very strange to add runtime semantics to certain cast operations.

@ivogabe
Contributor

ivogabe commented Aug 5, 2014

Indeed, there needs to be a rule that a non-typed variable declaration won't get the type integer or double, but always number. If you want a variable to be an integer or double, you'll need to specify that explicitly.

I chose runtime semantics for cast operations for various reasons. For performance: JS engines know better how much space they need to allocate for a number, and they know which overload of the + operator is used. Integer calculations are usually faster than floating-point ones.

There also needs to be a way to convert the different number types between each other. If you already generate JavaScript for cast operations, why not use a cast to convert a number type? Also this doesn't introduce a new syntax.

An alternative would be to write ~~ or + to convert numbers, but in my opinion the casts look better:

var d: double = 3.5;
var int: integer = <integer> d;
d = <double> int;

// Or

var d: double = 3.5;
var int: integer = ~~d; // Because |0 isn't supported on doubles according to the asm.js spec.
d = +d;

@mbebenita

How do you propose to deal with constants in an expression? For instance, what does 1 + 2 compile to: is it (1 + 2) | 0, or is it left as is? If the constants are integers, then you'll need to insert the coercion, which will break existing TypeScript code. If you assume that they are numbers, then the expression x + 1 will always be of type number.

It feels to me like you need to buy into the asm.js type system explicitly, so either do something like <integer> 1 + <integer> 2, or come up with a new syntax for declaring typed constants. Otherwise existing TS/JS code will break.

@ivogabe
Contributor

ivogabe commented Aug 6, 2014

1 + 2 is integer + integer, so it becomes integer, and it will be wrapped with (...)|0 in certain situations. number + integer falls back to number + number (since integer extends number), so this will return a double, and double also extends number, so it will be backwards compatible.

Compare it to this OOP example:

interface Base { // number
  base: string;
}
interface Foo extends Base { // double
  foo: string;
}
interface Bar extends Base { // integer
  bar: string;
}

function add(first: Base, second: Base): Foo;
function add(first: Bar, second: Bar): Bar;
// ... implementation of add ...
var base: Base, bar: Bar;
base = add(base, bar); // first signature, returns Foo, which extends Base.

When you call add in this example with a Base and a Bar, it'll return a Foo. The + operator is overloaded the same way in my proposal.

number + number => double because:

undefined + undefined = NaN;
null + null = 0;
null + undefined = NaN;
undefined + 2 = NaN;
null + 2 = 2;

@electricessence

This is awesome. PLEASE make ints and doubles!
An obvious simple example: array indexes should always take ints.

@fdecampredon
Author

If union types are adopted, a cool way to make this suggestion simpler to implement from the type-system point of view would be to infer int | number for an IntegerLiteral. This way:

var x = 1;
x = 1.5;

would still be valid, and the same would go for:

var t: { [i: int]: boolean } = {};
t[1]

without contextual typing

@Griffork

I honestly believe that for both ints and doubles, null and undefined should become (at least) NaN, if not retain their original value.
Otherwise you're introducing two new primitives that behave differently from every other type in JavaScript.

@metaweta

One major benefit of an int type is the ability to optimize arithmetic operations (particularly bitwise ops) to do multiple calculations without converting back and forth to floating point between each one. To do this, I'm pretty sure int has to be non-nullable.

@aholmes

aholmes commented Jun 27, 2015

Another point in favor of integer and float types is writing definition files. When an API specifically calls for an int or a float, it is dishonest to claim that the type is number.

@dead-claudia

And there are certain operations that only return 32-bit integers:

  • Math.fround
  • Math.trunc
  • Math.imul (requires both arguments to be 32-bit integers)
  • Math.clz32
  • Math.floor
  • Math.sign
  • x & y
  • x | y
  • x ^ y
  • ~x
  • The length property of nearly every builtin (every one pre-ES6)
  • Any enum without numbers assigned
  • All the Date instance methods that return numbers except valueOf and getTime
  • Any method on any SIMD integer type that returns a number (except for SIMD.int64x2)
  • DataView.getInt*
  • Every entry in the integer typed arrays.

Others only produce 32-bit integers under certain circumstances:

  • Math.abs(int)
  • Math.max(int...)
  • Math.min(int...)
  • +boolean
  • -boolean
  • -int
  • +int

And many of the core language APIs accept only integers (and coerce the ones that aren't):

  • All the Date instance setters that accept numbers except setTime
  • The entry setters on typed integer arrays
  • Any method that requires a number type on SIMD integer types.
  • Any lane-related argument in SIMD methods

As for numbers within the range 0 ≤ |x| < 2^53 ("safe" integers), add the following:

Always produce safe integers:

  • length instance property on every built-in
  • get ArrayBuffer.prototype.byteLength
  • Every Date class and instance method that returns a number
  • get RegExp.prototype.lastIndex
  • {Array,String}.prototype.indexOf
  • {Array,String}.prototype.lastIndexOf
  • The numeric properties of RegExp.prototype.exec results
  • The numeric instance properties of typed arrays.

Require safe integer arguments:

  • Every Date class and instance method's numeric arguments
  • Indices on every Array, TypedArray, and array-like object
  • Array's, ArrayBuffer's, and each TypedArray's length argument
  • ArrayBuffer.prototype.slice's arguments

@ivogabe
Contributor

ivogabe commented Sep 5, 2015

I've written a proposal in #4639. The big difference is that the emit is not based on type info (which means you can use it with isolatedModules, and it fits better with the design goals of TS). It also introduces not only an int type but also uint and fixed-size integers (like int8 and uint16, but also other sizes like int3 and uint45). Let me know what you think!

@rjmunro

rjmunro commented Oct 19, 2017

I'd vote for int / int = number, not int / int = int with the compiler then having to emit (x / y) | 0 or something. I would follow Python 3 here, not Python 2.

@Thaina

Thaina commented Jan 26, 2018

Do we have a function that rounds toward zero? I mean, 5/4 returns 1 and -5/4 returns -1.

Also, I think the point is that int / int could return number. But when a number is cast to int, it should be rounded toward zero:

var x = 5 / 4; // number 1.25
var y : int = 5 / 4; // int 1
var z = 5; // int 5
var w = x / 4; // number 0.3125

@dead-claudia

@Thaina No, but the closest you could get to that would be either (5/4)|0 or Math.trunc(5/4).
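
Both of those round toward zero, which is what was asked for; Math.floor differs on negative values:

console.log((5 / 4) | 0);        //  1
console.log((-5 / 4) | 0);       // -1 (toward zero)
console.log(Math.trunc(-5 / 4)); // -1 (toward zero)
console.log(Math.floor(-5 / 4)); // -2 (toward negative infinity)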

@styfle
Contributor

styfle commented Jan 26, 2018

If you are interested in integers, follow the BigInt proposal, which is already Stage 3 and could solve most of your integer use cases.

@dead-claudia

@styfle That'll require a separate type from this, because they would require infinite precision, and most DOM APIs would reject them until the WebIDL spec gets updated to accept BigInts where integers are expected. In addition, such integers can't be used with numbers, as the implicit ToNumber coercion would throw for them. (This is similar to how ToString throws for symbols.)

@tarcieri

tarcieri commented Mar 7, 2018

@isiahmeadows if you take a look at the writeup I did in this issue: #15096

...the BigInt proposal is useful for both fixed-width/wrapping and arbitrary precision types.

For example, you could map the following fixed-width types as follows using the given BigInt constructors:

  • int32: BigInt.asIntN(32, BigInt)
  • uint32: BigInt.asUintN(32, BigInt)
  • int64: BigInt.asIntN(64, BigInt)
  • uint64: BigInt.asUintN(64, BigInt)

My understanding is that these particular constructors are supposed to hint to the VM to use the correspondingly sized CPU architecture native integer types, but even if they don't, they should be semantically equivalent to the correspondingly sized wrapping integer types.
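
For illustration, the wrapping behavior of those constructors (standard BigInt API; requires an ES2020 target):

const a = BigInt.asUintN(32, -1n);      // 4294967295n: wraps into the uint32 range
const b = BigInt.asIntN(32, 2n ** 31n); // -2147483648n: overflows into the int32 range
const c = BigInt.asIntN(64, 42n);       // 42n: already in range, unchanged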

Also, the V8 team just announced Intent to Ship for TC39 BigInts 🎉

https://groups.google.com/forum/#!msg/v8-dev/x571Gr0khNo/y8Jk0_vSBAAJ

@dead-claudia

@tarcieri I'm familiar with that proposal. (I've also got a low need for BigInt, but that's a different deal.)

I'm still interested in a glorified int32 <: number for other reasons, since 32-bit machines still exist, and WebAssembly requires them for most integers (notably pointers). BigInts don't interest me as much, since I rarely deal with data that large in practice.

As for those constructors, I could see frequent use of BigInt.as{Int,Uint}N(64, BigInt) for some cases, but for 32-bit arithmetic, engines also have to validate the callee is static. It's also a bit more verbose than I'd like, although I could live with it.

@tarcieri

tarcieri commented Mar 8, 2018

I think there's a pretty natural mapping of those constructors to sized integer types, e.g.

let x: uint64 = 42;

with BigInt would compile down to:

let x = BigInt.asUintN(64, 42n);

I don't think it makes any sense to build any sort of integer type on number. JavaScript finally has native integers, and ones which will raise runtime exceptions if you attempt to perform arithmetic on a mixture of BigInts and numbers. A great way to avoid those runtime exceptions is static type checking, so I think having TypeScript assert that would be greatly helpful.
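
A small example of the mixed-arithmetic hazard that static checking would catch:

const big = 1n + 1n; // 2n: fine, both operands are bigint
// const bad = 1n + 1;
// Plain JS throws a TypeError ("Cannot mix BigInt and other types") at runtime;
// TypeScript rejects the expression at compile time.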

@ghost

ghost commented Mar 8, 2018

@tarcieri I like the new types but I disagree with the sugar. The value assigned to x shouldn't change based on the type of x. Make it a type error and let the user fix it.

@tarcieri

tarcieri commented Mar 8, 2018

@errorx666 TypeScript could use the same literal syntax as JavaScript, but that's unnecessary when the compiler already has the type information.

This is no different from almost every other statically typed language on earth, where you are free to write something to the effect of let x: double = 0, even though 0 uses the same literal syntax as integers.

@dead-claudia

@tarcieri I've been cautious on suggesting any of that, since that kind of thing has been repeatedly shot down by the TS team. (They seem to prefer sugar to be syntactic, not type-based.)

@ghost

ghost commented Mar 8, 2018

@tarcieri Suppose you have the type definition for x in a .d.ts file. Suppose that .d.ts file is in third-party code. Suppose, therefore, that the type definition may change without your knowledge or intent.

Should your compiled emit fundamentally change, without error or warning?

Suppose further that the change introduced some sort of bug. How much of a nightmare would it be to ultimately trace that bug down to a change in a .d.ts file in an npm @types package?

@styfle
Contributor

styfle commented Mar 8, 2018

@errorx666 That scenario can already happen with the following code:

status.d.ts

declare module server {
	const enum Status {
		None,
		Pending,
		Approved,
	}
}

main.ts

console.log(server.Status);

main.js

console.log(2);

@ghost

ghost commented Mar 8, 2018

@styfle Fair point, but at least both possible emits evaluate to 2, or, more importantly, to the same type (number).

@tarcieri

tarcieri commented Mar 8, 2018

I've been cautious on suggesting any of that, since that kind of thing has been repeatedly shot down by the TS team.

This may be too much syntax sugar to swallow, but the benefits outweigh the drawbacks, IMO. There is an opportunity here for TypeScript to model sized, typed integers in a way JavaScript VMs can understand, and also statically assert that programs are free of integer/number type confusion.

Should your compiled emit fundamentally change, without error or warning?

So, again, for context: we're discussing integer literals.

Every statically typed language I can think of, even where they do support type suffixes/tags, will interpret untagged literals according to the type they're being bound to. So to answer your question: yes, for untagged integer literals, there shouldn't be an error or warning even though the type changed.

If you're worried about type confusion there, then yes, the tagged syntax should be supported too, and that should fail if a type changes from a BigInt to a number.

@tarcieri

tarcieri commented Jun 8, 2018

Looks like #15096 is on the TypeScript 3.0 roadmap: https://github.com/Microsoft/TypeScript/wiki/Roadmap#30-july-2018

@RyanCavanaugh RyanCavanaugh added Out of Scope This idea sits outside of the TypeScript language design constraints and removed Needs Proposal This issue needs a plan that clarifies the finer details of how it could be implemented. labels Jun 8, 2018
@RyanCavanaugh
Member

We're still holding the line on type-directed emit. BigInt seems like a "close enough" fit for these use cases and doesn't require us to invent new expression-level syntax.

@ivogabe
Contributor

ivogabe commented Jun 8, 2018

Agreed, adding other syntax or types for "double based integers" would only be confusing.

@trusktr
Contributor

trusktr commented Jul 2, 2018

Make it explicit:

let i:int = 7
i = 1.1; // error, is a number
i = 3 / 4; // error, produces a number
i = Math.floor(3 / 4); // valid, might require a typedef update
i = (3 / 4) | 0; // valid

and don't do the automatic compile from 3 / 4 to (3 / 4) | 0, so we don't break expectations of JS. Just let the type system be purely static checks for convenience. It can also be smart: e.g., 3 * 4 remains an int, 3 * 4.123 does not, and perhaps 4 * 4.25 does produce an int (as a subset of number); see the sketch below.

And same for floats.
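
A sketch of that purely static inference (hypothetical rules; no emit changes):

const a = 3 * 4;     // 12: int * int stays int
const b = 3 * 4.123; // 12.369: mixing with a non-int widens to number
const c = 4 * 4.25;  // 17: integral value, so it could arguably stay int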

@trusktr
Contributor

trusktr commented Jul 2, 2018

"double based integers" would only be confusing.

Just int, nothing more. We're in JS, not C. :)

@tenshikaito

I want to know when we can use the int type annotation.

@microsoft microsoft locked as resolved and limited conversation to collaborators Jan 30, 2019