
docs: Added initial architecture decision records #1

Merged · 8 commits into main from primitive-adrs · Jun 7, 2024

Conversation

robdmoore (Contributor)

No description provided.

@joe-p (Contributor) left a comment

In general, I think semantic compatibility is too narrow of a requirement for the "frontend" of the language. I understand it was important for Python, but ECMAScript has a different set of restrictions, notably no operator overloading, and this was not a requirement for the current version of TEALScript, so it could introduce some major breaking changes.

Also, the more I have thought about it, the more concerned I have become about the engineering resources required to build and maintain "in-language" block evaluation for testing and debugging. We saw the effects of having two separate codebases for dry run and block evaluation, and introducing multiple codebases for block evaluation in each of the languages we support seems like a gargantuan amount of code to maintain. As Algorand grows and gains more features, this only becomes more and more difficult.

If the goal is to not have full block evaluation then I am concerned about tooling fragmentation. Having a tool that you can use "sometimes" is not great as we saw with tealdbg.

In my opinion we have all the pieces we need to provide a full visual debugging experience without an in-language evaluator. We have /simulate in algod and source mapping. Would it be as fast? No, of course not, but delivering a complete experience (that is easy to maintain) seems more important than a fast one, at least near term.

robdmoore (Contributor, Author) commented May 27, 2024

> In general, I think semantic compatibility is too narrow of a requirement for the "frontend" of the language. I understand it was important for Python, but ECMAScript has a different set of restrictions, notably no operator overloading, and this was not a requirement for the current version of TEALScript, so it could introduce some major breaking changes.

Python could have been implemented without semantic or syntactic compatibility as a requirement; it was a deliberate design decision, as articulated in the docs.

Key reasons for it include:

  • Principle of least surprise and meeting devs where they are - the code should do exactly what you would expect if you understand the underlying language, and the standard tooling should work. Also, the language should be (a subset of) the existing language, not a new language to learn.
  • Marketing - the above also helps differentiate the power of what we are doing from other industry attempts. The effort we've put into the puya compiler architecture, which translates to an AST (AWST), lets you take a real, high-level language and have a multi-step optimising compiler turn it into AVM bytecode (or TEAL templates). That's super powerful.
  • Automated testing - addressed below, but semantic compatibility enables this powerful ability.

It's fair to say that it's more work to do this, but we believe the benefits are worth it and it's a core strategy set by @Loedn and @johnalanwoods. It's also very fair to say that TypeScript doesn't make our life easy to do this given the lack of operator overloading and implicit type casting (noting Python also doesn't have implicit type casting, but does have operator overloading). Naturally, there will be some compromises that will need to be made to implement TypeScript so these architecture decision records lay out the first of these design decisions and will be followed by the others.

To your point, TEALScript was designed with different goals in mind so while we want Algorand TypeScript to closely resemble TEALScript, I think it's inevitable there would be breaking changes. However, per this pull request, the principle we want to take into this design exercise is that we minimise those changes and ensure there is a clear migration path.

> Also, the more I have thought about it, the more concerned I have become about the engineering resources required to build and maintain "in-language" block evaluation for testing and debugging. We saw the effects of having two separate codebases for dry run and block evaluation, and introducing multiple codebases for block evaluation in each of the languages we support seems like a gargantuan amount of code to maintain. As Algorand grows and gains more features, this only becomes more and more difficult.

I've spent a significant portion of my engineering career thinking about, writing about, experimenting with, teaching, blogging and speaking at conferences about automated testing, and guiding teams through it. We are leaps and bounds beyond where we were when I first started building against Algorand, but there is a huge leap forward we can still make that I know will markedly improve the development experience of Algorand smart contracts.

Given that we have a seam via the stub types that we program against, which define the Algorand Python et al. languages, we have a set of types we can provide an in-memory, in-process automated testing experience around. This type of testing gives us an order-of-magnitude faster test speed without dropping too much confidence. I gave a conference talk 10 years ago about speed vs confidence; the type of testing we are proposing is somewhat akin to subcutaneous testing.

What this will allow us to do is things like combinatorial testing, property-based testing and even model-based testing without needing minutes- or even hours-long tests (a really bad anti-pattern). These are all powerful techniques when you want a high degree of confidence in your logic (important for smart contracts). It's worth pointing out that this isn't meant to fully replace the current end-to-end testing that is done by executing on the AVM from tests. A good testing strategy uses a mix of techniques that covers a range of speed vs confidence and risk-based scenarios; this just adds another tool to the toolbelt.
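To make the seam idea concrete, here is a minimal TypeScript sketch (editor's illustration; `GlobalStateStore`, `InMemoryGlobalState` and `increment` are hypothetical names, not the actual stub-type API): contract logic programs against an interface, and tests swap in an in-memory implementation.

```ts
// Hypothetical seam: contract code depends on this interface, not on a ledger.
interface GlobalStateStore {
  get(key: string): bigint | undefined
  set(key: string, value: bigint): void
}

// In-memory test double: no network, no AVM, millisecond startup.
class InMemoryGlobalState implements GlobalStateStore {
  private store = new Map<string, bigint>()
  get(key: string) { return this.store.get(key) }
  set(key: string, value: bigint) { this.store.set(key, value) }
}

// Contract logic written against the seam.
function increment(state: GlobalStateStore): void {
  state.set('count', (state.get('count') ?? 0n) + 1n)
}

// Tests now run in-process, which is what makes combinatorial and
// property-based testing fast enough to be practical.
const state = new InMemoryGlobalState()
increment(state)
console.assert(state.get('count') === 1n)
```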

It's worth pointing out that even though we've barely talked about this ability to do automated tests in-process, we've already seen people start to realise how powerful that would be, e.g. some of the blog posts from alexander_codes.

I suspect a killer feature is going to be the fact that you can do breakpoint debugging against the Python code with millisecond startup times. Soon we will be aiming to have the same for replaying simulate transactions too, of course, with the AVM debugger functionality that was developed last year, but that still doesn't match the streamlined experience this will give you.

> If the goal is to not have full block evaluation then I am concerned about tooling fragmentation. Having a tool that you can use "sometimes" is not great as we saw with tealdbg.

We aren't trying to completely emulate the AVM; that would be extreme and, to your point, an amount of effort not commensurate with the payoff. As mentioned above, we have a seam we can exploit that's in the native language we are writing tests in. We can do a combination of: implementing the AVM functionality, e.g. for op codes (e.g.: https://github.com/algorandfoundation/puya/blob/feat/unit-testing-simpleops/algopy_testing/src/algopy/op.py#L62), along with automated tests that ensure our implementation matches (and continues to match) the AVM (e.g.: https://github.com/algorandfoundation/puya/blob/feat/unit-testing-simpleops/algopy_testing/tests/test_op.py#L154-L170); and then providing a context as a "test double" that lets you declaratively specify ledger state and assert that it was changed in ways you expect, or that the inner transactions you expected were issued, without us having to implement the ledger itself.
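As a hedged illustration of that combination (the linked examples are Python; this shows the same shape in TypeScript, with `addw` semantics taken from the AVM spec):

```ts
// In-language implementation of the AVM `addw` op: 128-bit add returning (hi, lo).
const MAX_UINT64 = (1n << 64n) - 1n

function addw(a: bigint, b: bigint): [bigint, bigint] {
  const sum = a + b
  return [sum >> 64n, sum & MAX_UINT64]
}

// A parity test would run the same op on the AVM (e.g. via algod's /simulate)
// and assert both results agree, keeping the in-language implementation honest.
console.assert(addw(MAX_UINT64, 1n).join(',') === '1,0')
```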

Naturally, there's a bit of setup initially, but the AVM rarely has breaking changes, so new stuff will be largely additive, and the compiler needs to keep up with new features regardless, so the extra effort should be relatively small on top of what's needed anyway. These are the official languages and compiler, so it's where effort needs to be spent to keep things up to date regardless. Also, there is always the ability for someone to simply mock the underlying methods using standard mocking tools (like pymock), so if there is a lag someone can patch over it in the meantime.

I don't think tealdbg or dryrun are a fair comparison - they weren't kept up to date when they should have been. In our case, the test suite won't pass if our implementations differ, so we'd be forced to keep things up to date; and again, we are building a compiler and frontends that already need to be kept up to date anyway.

Finally, this strategy of tackling automated testing has been set by @Loedn and @johnalanwoods as a design goal.

> In my opinion we have all the pieces we need to provide a full visual debugging experience without an in-language evaluator. We have /simulate in algod and source mapping. Would it be as fast? No, of course not, but delivering a complete experience (that is easy to maintain) seems more important than a fast one, at least near term.

We need this, and as I said above, we are working on it too :) I think I covered the rationale above though.

joe-p (Contributor) commented May 28, 2024

Yeah, I totally see the value and think it's very compelling. Potential for debugging via the Python interpreter was something developers liked hearing about at PyCon as well. That being said, /simulate is still missing some critical pieces to provide everything developers want, and obviously the in-language approach will be missing things that /simulate can do. I'm just concerned about ending up in a place where we don't have a tool that can do everything a developer wants, but instead have multiple tools that still leave gaps.

If this is the direction we want to go, so be it; I just wanted to raise concerns before a significant amount of resources has been invested.

Also, tomorrow I'll be PRing this branch to add some clarity on semantic compatibility. Based on the conversations we've had, it's clear to me that TEALScript already meets semantic compatibility in the way we want, so perhaps a good exercise would be identifying the problems with TEALScript first rather than starting completely from scratch.

robdmoore (Contributor, Author) commented May 28, 2024

> TEALScript already meets semantic compatibility in the way we want

I don't think it's wildly different, but there are definitely a few areas that I believe need to be looked at. That's the purpose of the architecture decision records (these two and the ones that will follow) - to explore each case one by one.

@joe-p (Contributor) left a comment

Made some more suggestions based on the conversations we've had on slack and in meetings.

I've also added a new principle for ABI abstraction, which has been a guiding principle for TEALScript that developers really value.

docs/README.md
## Algorand Python

[Algorand Python](https://algorandfoundation.github.io/puya/) is the Python equivalent of Algorand TypeScript. Whilst there is a primary goal to produce an API which makes sense in the TypeScript ecosystem, a secondary goal is to minimise the disparity between the two APIs, such that users who choose to, or are required to, develop on both platforms are not facing a completely unfamiliar API.

Contributor

Suggested change
## ABI Abstraction
When possible, Algorand TypeScript should avoid putting the cognitive overhead of ABI encoding/decoding on the developer. For example, there should be no difference between AVM byteslices and ABI-encoded strings, and they should be directly comparable and compatible until the point of encoding (returning, putting in state, array encoding, logging, etc.)

Contributor Author

I have a slightly more nuanced view of this.

There should be default typing options that meet this principle. Like how in Algorand Python if you use a UInt64 or String then Puya takes care of encoding/decoding for you.

But, I think it's important to also have control over that stuff when you want/need it, or when there isn't a direct translation between AVM primitives and the ABI type. The way this was achieved in Algorand Python was to have primitive types (e.g. UInt64, String) that automatically get decoded and encoded on their way in/out and are represented with the equivalent ABI type in ARC-32/4, but then to expose specific ABI-encoded types in a separate (arc4) namespace, so when you want to work with the encoded data you can do that too (and those types all have a .native property to easily decode to the relevant underlying AVM type). Reference: https://algorandfoundation.github.io/puya/lg-types.html
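For illustration only, a hypothetical TypeScript analogue of that arc4 pattern (the `Arc4Str` class is a sketch based on the ARC-4 string encoding, a uint16 length prefix plus utf-8 bytes; it is not a committed Algorand TypeScript API):

```ts
// Hypothetical ARC-4 string wrapper with a .native escape hatch.
class Arc4Str {
  constructor(private readonly encoded: Uint8Array) {}
  // Decode the ABI-encoded bytes (2-byte length prefix + utf-8 body) to a native value.
  get native(): string {
    return new TextDecoder().decode(this.encoded.slice(2))
  }
}

const raw = new TextEncoder().encode('hi')
const encoded = new Uint8Array([0, raw.length, ...raw]) // big-endian uint16 length prefix
console.assert(new Arc4Str(encoded).native === 'hi')
```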

Regardless, this is not relevant to these ADRs because they are talking about AVM primitive types. We plan on having a separate ADR to discuss how ABI types are handled.

Contributor

What are the scenarios where a developer wants to perform operations on the encoded bytes? Seems to be very rare. For edge cases TEALScript provides a rawBytes function that will give you the encoded bytes of the value.

> Regardless, this is not relevant to these ADRs because they are talking about AVM primitive types. We plan on having a separate ADR to discuss how ABI types are handled.

Ah gotcha. I do think they are a bit linked though, because compatibility between native types and ABI types is important. It's something developers have tripped up on with Beaker and Algorand Python.

- Less implicit type safety, as TypeScript will infer the type of any binary math expression to be the base numeric type (`number`). A type annotation will be required wherever an identifier is declared, and additional type checking will be required by the compiler to catch instances of assigning one numeric type to another.
- In order for the 'run on Node.js' semantics of a `uint64` or `biguint` to match 'run on the AVM', a transpiler will be required to wrap numeric operations in logic that checks for overflows and underflows (see the sketch below).

TEALScript uses a similar approach to this, but uses `number` as the underlying type rather than `bigint`, which has the aforementioned downside of not being able to safely represent a 64-bit unsigned integer, but does have the advantage of being directly compatible with raw numbers (e.g. `3` rather than `3n`) and JavaScript prototype methods that return `number`, like `array.length`. The requirement for semantic compatibility dictates that we need to use `bigint` rather than `number`, since it's the correct type to represent the data, and we will be able to create wrapper classes for things like arrays that have more explicitly typed methods for things like length.
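To illustrate the overflow wrapping described in the list above, a minimal sketch of what a transpiler might emit for `a + b` on `uint64` values (assumed semantics, not actual transpiler output):

```ts
const UINT64_MAX = (1n << 64n) - 1n

// Checked helper the transpiler could substitute for a bare `+`.
function uint64Add(a: bigint, b: bigint): bigint {
  const result = a + b
  // Mirror the AVM: uint64 overflow is a runtime failure, not a silent wrap-around.
  if (result > UINT64_MAX) throw new Error('uint64 overflow')
  return result
}

uint64Add(1n, 2n)             // 3n
// uint64Add(UINT64_MAX, 1n)  // throws, matching the AVM's `+` panicking on overflow
```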
Contributor

Suggested change
TEALScript uses a similar approach to this, but uses `number` as the underlying type rather than `bigint`, which has the aforementioned downside of not being able to safely represent a 64-bit unsigned integer, but does have the advantage of being directly compatible with raw numbers (e.g. `3` rather than `3n`) and JavaScript prototype methods that return `number`, like `array.length`. The requirement for semantic compatibility dictates that we need to use `bigint` rather than `number`, since it's the correct type to represent the data, and we will be able to create wrapper classes for things like arrays that have more explicitly typed methods for things like length.
TEALScript uses a similar approach to this, but uses `number` as the underlying type rather than `bigint`, which has the aforementioned downside of not being able to safely represent a 64-bit unsigned integer, but does have the advantage of being directly compatible with raw numbers (e.g. `3` rather than `3n`) and JavaScript prototype methods that return `number`, like `array.length`.

This is not true. TypeScript does not do any checking on number sizing.

Contributor Author

It does in number literals, actually. You can't have a 64-bit literal constant as a number without losing precision - the TypeScript type system does know the difference between a bigint and a number.
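This is easy to verify: the maximum uint64 value cannot round-trip through a `number` literal, while a `bigint` literal is exact.

```ts
const asNumber = 18446744073709551615   // max uint64 as a number literal
const asBigInt = 18446744073709551615n  // the same value as a bigint literal

console.log(asNumber)                       // 18446744073709552000 - precision lost
console.log(asBigInt)                       // 18446744073709551615n - exact
console.log(BigInt(asNumber) === asBigInt)  // false
// (tsc accepts the imprecise literal; some linters flag it, as noted below.)
```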

number as a base type would need the custom transpiler to change it to a bigint, which is a much more complex transpilation, but may be possible. Regardless, I'll note this as a separate option so it can be discussed per my other comment.

Contributor

> You can't have a 64-bit literal constant as a number without losing precision

What are the implications of this? The only scenario I can think of is not having a TypeScript error on direct comparison, but I'm not sure why someone would be comparing two constants to begin with.

> the TypeScript type system does know the difference between a bigint and a number

Right, it does know the difference between bigint and number, but we can always just use the transformer to convert it to BigInt.

> number as a base type would need the custom transpiler to change it to a bigint, which is a much more complex transpilation, but may be possible.

TEALScript supports large number constants this way.
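For reference, a rough sketch of that transformer idea using the TypeScript compiler API (editor's illustration; TEALScript itself uses a ts-morph-style API, as in the snippet further below). The key point is to read the literal's source text, so no precision is lost, and emit a bigint literal in its place.

```ts
import * as ts from 'typescript'

// Rewrites every numeric literal into an equivalent bigint literal.
const toBigIntLiterals: ts.TransformerFactory<ts.SourceFile> = (context) => {
  const visit: ts.Visitor = (node) => {
    if (ts.isNumericLiteral(node)) {
      // getText() preserves the exact source characters (e.g. "18446744073709551615"),
      // unlike the parsed numeric value, which may already be rounded.
      const raw = node.getText().replace(/_/g, '')
      return ts.factory.createBigIntLiteral(raw + 'n')
    }
    return ts.visitEachChild(node, visit, context)
  }
  return (sourceFile) => ts.visitNode(sourceFile, visit) as ts.SourceFile
}
```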

Contributor Author

> TEALScript supports large number constants this way.

Cool - checking: it doesn't lose precision then?

Contributor

No, not during compilation. When TEALScript comes across a number it converts it to a bigint. To be fair though, some linters will complain about it depending on the configuration.

https://github.com/algorandfoundation/TEALScript/blob/dev/tests/contracts/math.algo.ts#L45

joe-p (Contributor) commented May 30, 2024

I've talked about this with @Loedn but wanted to note it here as well for visibility. It strikes me as odd that the options in both ADRs omit the way these are currently handled in TEALScript, especially since one of the guiding principles is "TEALScript compatibility"

robdmoore (Contributor, Author)

> It strikes me as odd that the options in both ADRs omit the way these are currently handled in TEALScript, especially since one of the guiding principles is "TEALScript compatibility"

Both ADRs do have the TEALScript option - option 1 in bytes (noting that it doesn't mention branded types, but the same pros and cons apply) and option 3 in ints.

It's a fair comment that the specific TEALScript implementation isn't as clearly articulated as it could be, that's not deliberate though. I'm tweaking them to make it clearer :)

joe-p (Contributor) commented May 31, 2024

> Both ADRs do have the TEALScript option - option 1 in bytes (noting that it doesn't mention branded types, but the same pros and cons apply) and option 3 in ints.

Yeah, I felt like using string for both ASCII strings and byteslices (rather than Uint8Array) makes it different enough to be a separate option, but I have no problem if we just want to tweak that option.

For option 3, I think bigint vs number makes a significant difference because they are used rather differently in EcmaScript, and it has implications for compatibility with other prototype methods.

robdmoore (Contributor, Author)

> Using string for both ASCII strings and byteslices (rather than Uint8Array) makes it different enough to be a separate option

Agreed, it's a fair point


EcmaScript and TypeScript both do not support operator overloading, despite some [previous](https://github.com/tc39/notes/blob/main/meetings/2023-11/november-28.md#withdrawing-operator-overloading) [attempts](https://github.com/microsoft/TypeScript/issues/2319) to do so.

TealScript [makes use of branded `number` types](https://tealscript.netlify.app/guides/supported-types/numbers/) for all bit sizes from 8 => 512, although it doesn't allow `number` variables; you must specify the actual type you want (e.g. `uint64`). Since the source code is never executed, the safe limits of the `number` type are not a concern. Compiled code does not perform overflow checks on calculations until a return value is being encoded, meaning a uint<8> is effectively a uint<64> until it's returned.
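For readers unfamiliar with the pattern, a simplified sketch of what such branded number types look like (illustrative only; see the linked TEALScript docs for the real definitions):

```ts
// Branded numbers: plain `number` at runtime, nominally distinct to the type checker.
type uint64 = number & { readonly __brand: 'uint64' }
type uint8 = number & { readonly __brand: 'uint8' }

declare function asUint64(n: number): uint64 // hypothetical cast helper

const a = asUint64(42)
// const b: uint8 = a // compile-time error: 'uint64' is not assignable to 'uint8'
```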
Contributor

Suggested change
TealScript [makes use of branded `number` types](https://tealscript.netlify.app/guides/supported-types/numbers/) for all bit sizes from 8 => 512, although it doesn't allow `number` variables; you must specify the actual type you want (e.g. `uint64`). Since the source code is never executed, the safe limits of the `number` type are not a concern. Compiled code does not perform overflow checks on calculations until a return value is being encoded, meaning a uint<8> is effectively a uint<64> until it's returned.
TealScript [makes use of branded `number` types](https://tealscript.netlify.app/guides/supported-types/numbers/) for all bit sizes from 8 => 512, although it doesn't allow `number` variables; you must specify the actual type you want (e.g. `uint64`). Since the source code is never executed, the semantics of `number`'s math operators (overflows, and fractional results from division) are not a concern; however, literal values above `Number.MAX_SAFE_INTEGER` will lose precision when parsed into the TypeScript AST. Compiled code does not perform overflow checks on calculations until a return value is being encoded, meaning a uint<8> is effectively a uint<64> until it's returned.

I was surprised to see this: even though the node for a NumericLiteral stores the value as a string, it seems that the parser shows the rounded value instead of the raw literal.

Contributor

"when being parsed into typescript AST" is a bit unclear to me. The node itself has text for the literal, so you can always get the node text and parse it as BigInt in transformer.

For example, this function from TEALScript to get the text for a numeric literal type

```ts
  // Resolves the raw source text of a numeric literal, following identifier
  // aliases back to their declarations, so the exact literal is preserved
  // rather than the parsed (possibly precision-rounded) value.
  // (Uses a ts-morph-style node API: isKind/getSymbol helpers, not the raw compiler API.)
  private getNumericLiteralValueString(node: ts.Node): string {
    let value = '';
    if (node.isKind(ts.SyntaxKind.Identifier)) {
      const symbol = node.getSymbol()?.getAliasedSymbol() || node.getSymbolOrThrow();

      // Recurse into the initializer of the variable the identifier refers to
      symbol.getDeclarations().forEach((d) => {
        if (d.isKind(ts.SyntaxKind.VariableDeclaration)) {
          value = this.getNumericLiteralValueString(d.getInitializerOrThrow());
        }
      });
    } else return node.getText().replace(/_/g, ''); // raw source text, numeric separators stripped

    if (value === '') throw Error();
    return value;
  }
```

joe-p and others added 2 commits May 31, 2024 15:57
Co-authored-by: Neil Campbell <neil@codecise.com>
…ude additional notes about semantic compatability when working with the native EcmaScript string
Whilst binary data is often a representation of a utf-8 string, it is not always - so direct use of the string type is not a natural fit. It doesn't allow us to represent alternative encodings (b16/b64) and the existing api surface is very 'string' centric. Much of the api would also be expensive to implement on the AVM leading to a bunch of 'dead' methods hanging off the type (or a significant amount of work implementing all the methods). The signatures of these methods also use `number` which is [not a semantically relevant type](./2024-05-21_primitive-integer-types.md).

Achieving semantic compatibility with EcmaScript's `String` type would also be very expensive, as it uses utf-16 encoding underneath whilst an ABI string is utf-8 encoded. A significant number of ops (and program size) would be required to convert between the two. If we were to ignore this and use utf-8 at runtime, APIs such as `.length` would return different results. For example, `"😄".length` in ES returns `2` whilst utf-8 encoding would yield `1` codepoint or `4` bytes; similarly, indexing and slicing would yield different results.
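The `.length` divergence is easy to demonstrate in plain ECMAScript:

```ts
const s = '😄'
console.log(s.length)                           // 2 - UTF-16 code units
console.log([...s].length)                      // 1 - Unicode code points
console.log(new TextEncoder().encode(s).length) // 4 - utf-8 bytes
```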
Contributor

But I thought we were after semantic compatibility with TypeScript, not EcmaScript?

tristanmenzel (Contributor) commented Jun 3, 2024

TypeScript, being a superset of EcmaScript, takes its string semantics from EcmaScript, so it made more sense to me to reference the latter. This will probably apply in most cases. Maybe it's not valuable to distinguish between the two and we should just use 'TypeScript' everywhere.

Contributor

I think the distinction is semantic compatibility with the runtime vs semantic compatibility with the type system. We know we'll have a transformer to adjust runtime behavior, thus it still seems reasonable to me to use string to represent utf-8 strings (much like how we can use number to represent uint64).

Contributor

Semantic compatibility means that, given some TypeScript code, it either runs the same in our execution environment (test execution and AVM execution), or doesn't run at all (if we don't support a given syntax).

AlgoTS will need to introduce several types that don't exist in any existing javascript execution environment, so there is no expectation that you will be able to lift AlgoTS code and run it in a browser, but the syntax and semantics should be the same.

For the reasons listed above, I think it is impractical to have a string type on the AVM behave the same as the typescript/ecmascript string, so we would be better off introducing a new type that doesn't carry any semantic expectations.

As another example, given this function

```ts
function test(x: string) {
  return x[0]
}
```

When you pass in the string "a", you would get "a" back in any javascript execution environment - and the same in TealScript; however, if you pass in the string "ā", you would get "ā" back in any javascript execution environment - but you'll get 0xC4 in TealScript (an invalid utf-8 string).
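Checking that example in a JavaScript runtime (U+0101 'ā' encodes to the two utf-8 bytes 0xC4 0x81):

```ts
const bytes = new TextEncoder().encode('ā') // Uint8Array [0xC4, 0x81]
console.log('ā'[0])                         // 'ā' - ECMAScript string semantics
console.log(bytes[0].toString(16))          // 'c4' - first utf-8 byte, per the TealScript behaviour described
```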


With regards to number, we deliberately don't expose the type as number and force implicit and explicit casts when dealing with literals and operations. In other words, we pretend to introduce a new type uint64 without semantic expectations, but are forced (by limitations of typescript/ecmascript) to depend on number underneath.

Contributor

> When you pass in the string "a", you would get "a" back in any javascript execution environment - and the same in TealScript; however, if you pass in the string "ā", you would get "ā" back in any javascript execution environment - but you'll get 0xC4 in TealScript (an invalid utf-8 string).

Ah ok thanks for the example. That makes sense.

> With regards to number, we deliberately don't expose the type as number and force implicit and explicit casts when dealing with literals and operations. In other words, we pretend to introduce a new type uint64 without semantic expectations, but are forced (by limitations of typescript/ecmascript) to depend on number underneath.

Perhaps a viable option here is also a branded string, with the upside being string literal support.
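A sketch of that branded-string option (hypothetical names): structurally a string, so plain literals can be assigned with a cast, but nominally distinct from `string`.

```ts
declare const avmStringBrand: unique symbol
type Str = string & { readonly [avmStringBrand]: never }

const greeting = 'hello' as Str // plain literal support - the upside noted above
// Downside: the full String prototype is still visible on `greeting`
// (.length, .charCodeAt, ...), carrying UTF-16 semantic expectations.
```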

Contributor

A branded string is definitely an option - the major con is the crowded prototype, which we can (probably??) clean up with a plugin, but that's additional complication (and development effort). It feels like a lot to pay to get plain literals `"hello"` over tagged literals `` Str`hello` ``.

Contributor

Yeah, true; branded strings still wouldn't address the incorrect intuition of `str[x]` accessing byte x rather than character x.

@@ -48,7 +48,9 @@
```ts
const b2 = new Uint8Array([1, 2, 3, 4])
const b3 = b1 + b1
```

Whilst binary data is often a representation of a utf-8 string, it is not always - so direct use of the string type is not a natural fit. It doesn't allow us to represent alternative encodings (b16/b64) and the existing api surface is very 'string' centric. Much of the api would also be expensive to implement on the AVM leading to a bunch of 'dead' methods hanging off the type (or a significant amount of work implementing all the methods). The signatures of these methods also use `number` which is [not a semantically relevant type](./2024-05-21_primitive-integer-types.md).
Contributor

The main problem with having non-implemented functions is them still showing in the IDE, but we can solve this with plugins.

Contributor Author

It's also name clashes, e.g. if we want a `length` method that returns `uint64`, you can't do that by extending the string prototype.

```ts
const b = Bytes.fromHex("ABFF")
const c = Bytes.fromBase64("...")
const d = Bytes.fromInts(255, 123, 28, 20)
const e = Bytes`${a} World!`
```
Contributor

Minor detail, but I would propose we take inspiration from Python and use `b`:

```ts
b`${a} World!`
```

Contributor

I did explore this but found (at least in contrived example code) the likelihood of a name clash with a local variable to be quite high, e.g. `const [a, b, c] = someArray`. This resulted in an unfriendly error. We could use `B` if we prefer to be terse at the expense of clarity.

Contributor

Ah right, yeah, you did mention that. I don't feel strongly either way, so we should just see what existing developers would prefer.

Loedn (Member) commented Jun 6, 2024

Option 4 for bytes, Option 5 for ints - @Loedn @ehanoc @johnalanwoods

joe-p (Contributor) commented Jun 6, 2024

As we start to make decisions I will be slowly merging features into TEALScript so the jump is not so sudden.

Ints option 5 merged here: algorandfoundation/TEALScript#489

bytes option 4 PR here: algorandfoundation/TEALScript#490

We can't really merge bytes yet because it's implicitly tied to ABI strings as well, and a decision needs to be made there. I agree option 4 is the best option here, but there are definitely some slight variations that we should wait for developer feedback on before setting anything in stone.

@tristanmenzel tristanmenzel merged commit 7f87b6e into main Jun 7, 2024
@tristanmenzel tristanmenzel deleted the primitive-adrs branch June 7, 2024 17:04