
Crate local state for procedural macros? #44034

Open
LukasKalbertodt opened this issue Aug 22, 2017 · 29 comments
Labels
A-macros Area: All kinds of macros (custom derive, macro_rules!, proc macros, ..) A-proc-macros Area: Procedural macros C-feature-request Category: A feature request, i.e: not implemented / a PR. T-lang Relevant to the language team, which will review and decide on the PR/issue.

Comments

@LukasKalbertodt
Member

I'm tinkering a bit with procedural macros and encountered a problem that can be solved by keeping state in between proc macro invocations.

Example from my real application: assume my proc-macro crate exposes two macros: config! {} and do_it! {}. The user of my lib is supposed to call config! {} only once, but may call do_it! {} multiple times. But do_it! {} needs data from the config! {} invocation.

Another example: we want to write a macro_unique_id!() macro returning a u64 by counting internally.


How am I supposed to solve those problems? I know that somewhat-global state is usually bad. But I do see applications for crate-local state for proc macros.

@shepmaster shepmaster added A-macros Area: All kinds of macros (custom derive, macro_rules!, proc macros, ..) C-feature-request Category: A feature request, i.e: not implemented / a PR. T-lang Relevant to the language team, which will review and decide on the PR/issue. labels Aug 25, 2017
@abonander
Contributor

abonander commented Mar 8, 2018

Statics and thread-locals should both be safe to use as the proc-macro crate is loaded dynamically and remains resident for the duration of the macro-expansion pass for the current crate (each crate gets its own compiler invocation). This is not necessarily stable as eventually we want to load proc-macros as child processes instead of dynamic libraries, but I don't see why they wouldn't be kept alive for the duration of the crate's compilation run anyway.
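As an illustration of the thread-local approach (a sketch only: the function name is made up, and in a real proc-macro crate this logic would sit behind a #[proc_macro] function rather than main), the macro_unique_id!() counter from the issue could be backed by state like this:

```rust
use std::cell::Cell;

// Hypothetical sketch of the state a `macro_unique_id!()` implementation
// could keep while the proc-macro library stays resident for the crate's
// compilation. Shown as plain Rust so the idea is runnable on its own.
thread_local! {
    static COUNTER: Cell<u64> = Cell::new(0);
}

fn next_unique_id() -> u64 {
    COUNTER.with(|c| {
        let id = c.get();
        c.set(id + 1);
        id
    })
}

fn main() {
    // Each "expansion" observes the state left behind by the previous one.
    println!("{} {} {}", next_unique_id(), next_unique_id(), next_unique_id());
}
```

As the comments below point out, this relies on the proc-macro library staying loaded in one process for the whole compilation, which incremental compilation and rust-analyzer do not guarantee.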

@durka
Contributor

durka commented Mar 8, 2018

@abonander I don't think this is reliable for two reasons:

  1. Proc macros may not be run on every compilation, for instance if incremental compilation is on and they are in a module that is clean

  2. There is no guarantee of ordering -- if do_it! needs data from all config! invocations, that's a problem.

@matprec
Contributor

matprec commented May 28, 2018

Addressing ordering

Declare dependencies between macros to enable delaying of macro execution. In practical terms, think of macros Foo and Bar: by declaring that Bar depends on Foo, all invocations of Foo must complete before any invocation of Bar.
E.g.

#[proc_macro_derive(Foo)]
pub fn foo(input: TokenStream) -> TokenStream {
    ...
}
#[proc_macro_derive(Bar, depends_on(Foo))]
pub fn bar(input: TokenStream) -> TokenStream {
    ...
}

Addressing incremental compilation

Persistent storage, maybe web-like "local storage", per proc-macro crate? This would store and load a byte array, which the user could (de)serialize with e.g. serde: fn set_state(Vec<u8>), fn get_state() -> Vec<u8>
Don't know about access though, how would it be provided to the proc macro crate? Global Memory? Wrapped in a Mutex?
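A minimal sketch of what such a set_state/get_state pair could look like if backed by files (the storage location and the function names are assumptions for illustration; a real design would need a compiler-provided, crate-scoped path rather than the temp directory):

```rust
use std::fs;
use std::path::PathBuf;

// Hypothetical file-backed storage for the proposed API. Using the temp
// directory is a placeholder assumption; it is neither crate-scoped nor
// cleaned between builds.
fn state_path(key: &str) -> PathBuf {
    std::env::temp_dir().join(format!("proc_macro_state_{key}"))
}

fn set_state(key: &str, data: &[u8]) {
    fs::write(state_path(key), data).expect("write state");
}

fn get_state(key: &str) -> Vec<u8> {
    // Missing state reads back as an empty byte array.
    fs::read(state_path(key)).unwrap_or_default()
}

fn main() {
    set_state("demo", b"hello");
    println!("{}", String::from_utf8(get_state("demo")).unwrap());
}
```

This is roughly the scheme the macro_state crate mentioned later in the thread implements.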

Emerging questions

  1. Could a storage system be implemented in a crate? Assuming cargo project layout, store serde state in the target folder as files?
  2. How much state should macros have, if at all? Should the same macro have access to state from previous invocations?
    • Pro
      • error message if variable name already in use
      • shared counter
    • Contra
      • Artifact invalidation becomes harder: in the worst case, every macro of that kind has to be re-invoked to rebuild the state, even for semantically equivalent changes like reformatting

@Thermatix

Has there been any movement on this issue?

@oli-obk
Contributor

oli-obk commented Apr 25, 2019

A persistent storage, maybe a web-like "local storage", per proc-macro crate?

The problem with such a scheme is that invocations of the same macro are still not ordered, so you can easily end up in a situation where the order of invocation changes the result.

If an ordering scheme between proc macros is implemented, one could consider giving bar read access to foo's storage. Though, this would depend on foo only ever being called once per crate (due to the output of multiple foo calls having unspecified order).

@Thermatix

Thermatix commented Aug 9, 2019

Perhaps I'm missing some nuance or information, but what if, when you defined local storage, you also had to declare any and all files affected by it? Then when you re-compiled, the compiler would scan those files for changes and re-expand as appropriate.

Whilst I'm all for automating where possible, the advantage is that you now get a clear list of the files involved, and it would provide working functionality for what this issue is trying to solve. Even if it's not perfect, as long as it's reasonably ergonomic (despite having to list the files), it should be good enough.

Yes, you do have to declare each file that gets affected, but no doubt there is a way to automate even that.

@ZNackasha

I would love to have this feature to solve the PyO3 add_wrapped requirement.
https://github.com/PyO3/pyo3

@andrewreds

A different approach. What do people think?

Stateful Macros

(I need better names, syntax, etc., but the idea should come across)

Have a "stateful macro", which is a compile time struct.

Things run in 3 main steps:

  1. The stateful macro object is initialized with the new!() call
// (May need to be tagged for compiler parsing reasons)
const db: postgresql::DBMacro = postgresql::new!(version => 42, schema => "cats.sql");
  2. The stateful macro object can have procedural macros called off it
fn count_cats(con: DBConnection, cuteness: u64) -> u64 {
    // sql! is able to read from data passed in from new!
    // eg, could type check the database query
    // if a field in db is a set, then it can be added to, but not read from
    // otherwise it can be read from, but not written to
    db.sql!(con, select count(*) from cats where cuteness >= $cuteness)
}
  3. The macro object can have "delayed symbols"
fn main() {
    let con = postgresql::Connection::new("10.42.42.42");

    // all_queries is generated from a function that runs in a later stage
    // of the compilation, and gets linked in like normal.
    // It must have a fixed type signature (can change based on params to new!)
    // The function is able to read from all of db's variables, but write to none
    con.prepare(db.all_queries!);
    // expanded to: con.prepare(magic_crate_db::all_queries);

    // use the preprocessed query
    println!("There are {} extra cute cats!", count_cats(con, 4242));
}

note: Changes to the values of delayed symbols don't require recompilation of the crate using it.

Another example: the sql! macro may want a unique id by which it can refer to this query in the db. The sql! macro could insert a symbol whose content is the position at which the query was inserted into the set (evaluated in stage 3). The sql! macro would not be able to see the contents of this symbol, but can inject it into the code.

Compiler-wise, crates are compiled independently as normal until just before they need to be linked. The compiler would then:

  • aggregate the sets together
  • grab a list of all delayed symbols
  • dynamically build a new crate by calling functions within the stateful macro
    • passing in the stateful macro's state to work out the content of the delayed symbols
  • compile & link this crate in like normal

Use cases

  • Handler subscriptions (web router, command line flags, event handler etc)
  • Plugin architecture
  • Preprocessing rust data-structures
  • End user configurable macros

Addressing points raised

  • 'somewhat-global state is usually bad'
    • This design is more like standard Rust objects than global state
    • If you want 'global' state, you call new! once, and import it to all of your crates
    • If you want fine control, you call new! multiple times (even within the same file)
  • When do you recompile things?
    • Stage 2 macros need to get recompiled if changes to new! change any of the values within the object
      • Slow, but rare
    • The stage 3 magic crate also needs to get recompiled if any of the sets change
      • Fast (as it is only recompiling a handful of functions)
    • Both of these could use hashes to detect whether recompilation is needed
    • Lazy solution: recompile stage 2 if the file containing new! is changed; recompile stage 3 on every compile.
  • Ordering issues
    • Data can only flow into a later stage
    • new! can only be called once per instantiation
    • Sets are order free

Other points of note

  • This could get split into two parts
  • I think this is a much more in-depth change than what other people were suggesting in this thread
  • I've only written down code from the macro user's perspective. I'm after people's thoughts before looking at the harder side

@steveklabnik
Member

To me, it feels like:

  1. properly supporting this feature means adding a new API
  2. a new API would have a large surface area
  3. this means that this should go through the RFC process.

@casey
Contributor

casey commented May 11, 2020

@LukasKalbertodt If config!() is read-only configuration and doesn't need to be Rust code, one solution might be to add it as metadata to the root Cargo.toml:

# rest of Cargo.toml

[package.metadata.foo]
config-item = "hello"

One potential issue is ensuring that macros are re-expanded whenever the metadata key changes, which I'm not sure how to accomplish.
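A rough sketch of how a proc macro could read such metadata today (assuming it runs with Cargo's CARGO_MANIFEST_DIR pointing at the caller's package; the naive line scan below stands in for a real TOML parser, and the key name comes from the snippet above):

```rust
use std::env;
use std::fs;

// Illustrative only: find the `config-item` value in the manifest's
// [package.metadata.foo] table. A real macro would use a TOML parser
// and proper table scoping; this scan just shows where the data lives.
fn read_config_item(manifest_dir: &str) -> Option<String> {
    let toml = fs::read_to_string(format!("{manifest_dir}/Cargo.toml")).ok()?;
    let line = toml
        .lines()
        .find(|l| l.trim_start().starts_with("config-item"))?;
    // Take the text between the first pair of double quotes.
    let mut parts = line.splitn(3, '"');
    parts.next()?; // text before the opening quote
    parts.next().map(|s| s.to_string())
}

fn main() {
    // Simulate the environment a proc macro would see during expansion.
    let dir = env::temp_dir();
    fs::write(
        dir.join("Cargo.toml"),
        "[package.metadata.foo]\nconfig-item = \"hello\"\n",
    )
    .unwrap();
    println!("{:?}", read_config_item(dir.to_str().unwrap()));
}
```

Note this inherits exactly the staleness problem described above: nothing re-expands the macro when only Cargo.toml changes.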

@sam0x17

sam0x17 commented Nov 19, 2022

FYI I implemented a scheme very similar to this in my macro_state crate. So far it seems to avoid most of the pitfalls. If not, pull requests welcome!

@recatek

recatek commented Apr 29, 2023

Would it be possible to mark a proc macro as always-dirty, or that it shouldn't be cached? This might be necessary for stateful proc macros. It's currently unclear which parts of the toolchain cache macro invocations, if any, and which might in the future.

Something like

#[proc_macro(cache = false)]

@bjorn3
Member

bjorn3 commented Apr 29, 2023

Not caching in the incr comp cache is not enough for stateful macros. You need to throw away the state every time and make sure state for different crates isn't mixed. For rust-analyzer this requires restarting the proc macro server every time anything changes, and it prevents reusing the same proc macro server for multiple crates. There is no way to restart the proc macro server and re-expand the whole crate on every keystroke (inside a macro expansion) fast enough to avoid noticeable latency. Depending on the number of macro invocations this could take multiple seconds, but even just restarting the proc macro server on Windows would be noticeable, I think.

@Thermatix

@bjorn3 Could you not provide a way to mark a macro as being stateful vs not? Then in the macro server you allow only those that are marked as such to be constantly re-expanded?

@bjorn3
Member

bjorn3 commented Aug 10, 2023

Constantly re-expanding would be bad for latency in an IDE. You only have like 100ms after typing to compute autocompletion suggestions without it feeling sluggish. Re-expanding all macros of a specific kind and invalidating all necessary caches on every keystroke may well cost a significant part of that budget.

And that still doesn't solve the issue of ordering. Rustc may currently run everything in source order, but rust-analyzer will lazily expand out of order.

@Thermatix

One of the hardest things in programming: cache invalidation. Le sigh.

@Eliah-Lakhin

Eliah-Lakhin commented Mar 12, 2024

I believe the approach proposed by David Tolnay in his crate linkme solves a huge part of this topic's problem without side-effect pitfalls for the current compiler workflow (in particular, without harming incremental compilation or IDE plugins).

Instead of introducing global state in the macro expansion stage, or life-before-main in the runtime stage, the crate uses linker capabilities to collect related metadata across the compilation unit.

Under the hood the linkme crate provides two macros: the first assigns a common linker section name to Rust statics, which makes the linker concatenate all such static values into a single slice during the linking stage; the second associates another static with this link section (which will point to the slice assembled by the linker).

With this approach you can implement a macro that "writes" data into a common shared slice in unspecified order, but you cannot "read" this data back from macro code (during macro expansion); you have to consume the slice at runtime.

This solution solves the problem of this issue only partially:

  1. It provides a safe way to globally expose the data shared by the macros code.
  2. But it doesn't establish dependencies between macros, and it doesn't make the macros stateful by itself.

Hopefully it should be enough to solve most of the cases where one practically needs macro intercommunication (e.g. a plugin system of some kind).

One drawback of the current linkme implementation is that it is platform-dependent. Linkers have slightly different interfaces depending on the target platform, and in particular the wasm target doesn't have a linker at all.

David's idea to address this is to lift an API similar to linkme's into Rust's syntax itself, such that one could use linking capabilities for statics without needing to write platform-dependent code.

I believe this proposal needs more visibility by the Rust community.

@sam0x17

sam0x17 commented Mar 12, 2024

You need to throw away the state every time and make sure state for different crates isn't mixed

In my experience, having a build number that increments with every new build and/or a const timestamp that you generate right at the start of a build, and throwing out everything that is older than that is actually sufficient and works in most cases, but is entirely dependent on the coincidence that right now the compiler processes files top to bottom one at a time. For needier use-cases, you generate a random u64 and throw out anything not built with that particular u64, resulting in that always-dirty behavior. You can also achieve something similar using mutexes in your proc-macro crate. This isn't the best solution obviously, but it's definitely not an insurmountable problem, especially if we are making tweaks to the actual build process and not just trying to hack around this limitation in current stable. If we can hack it to work 99% of the time in stable, we could definitely tweak the build process to close that last 1%.
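A toy model of the build-id tagging idea described above (all names are hypothetical; a real version would persist entries to disk rather than keeping them in a HashMap):

```rust
use std::collections::HashMap;

// Each state entry records the build id it was written under; reads
// ignore anything tagged with a stale id, so state from earlier builds
// is effectively thrown away.
struct MacroState {
    build_id: u64,
    entries: HashMap<String, (u64, String)>, // key -> (build_id, value)
}

impl MacroState {
    fn write(&mut self, key: &str, value: &str) {
        self.entries
            .insert(key.to_string(), (self.build_id, value.to_string()));
    }

    fn read(&self, key: &str) -> Option<&str> {
        self.entries
            .get(key)
            .filter(|(id, _)| *id == self.build_id) // discard stale builds
            .map(|(_, v)| v.as_str())
    }
}

fn main() {
    let mut state = MacroState { build_id: 1, entries: HashMap::new() };
    state.write("config", "old");
    state.build_id = 2; // a new build starts
    println!("{:?}", state.read("config")); // stale entry is ignored
    state.write("config", "new");
    println!("{:?}", state.read("config"));
}
```

As the next comment explains, this still doesn't help rust-analyzer, which keeps one long-lived proc-macro process and caches expansions across edits.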

for a really cursed implementation of this that I don't endorse at all, see https://github.com/sam0x17/macro_state/blob/main/macros/src/macros.rs

that said, I think going forward something like the linkme approach is probably the way to go, but would be nice if this wasn't dependent on platform-specific nuances

@bjorn3
Member

bjorn3 commented Mar 12, 2024

In my experience, having a build number that increments with every new build and/or a const timestamp that you generate right at the start of a build, and throwing out everything that is older than that is actually sufficient and works in most cases, but is entirely dependent on the coincidence that right now the compiler processes files top to bottom one at a time.

That doesn't work for rust-analyzer. Rust-analyzer handles proc-macros for all crates in a single process and doesn't restart this process between rebuilds, only when the proc-macro itself got rebuilt. Rust-analyzer caches all macro expansions between edits. In the past it was possible for macro expansions to get evicted from the cache, but this is no longer the case, as non-determinism in some macros caused rust-analyzer to crash after recomputing an evicted macro expansion.

Rust-analyzer doesn't have any issues with the linkme approach however.

@dbsxdbsx

dbsxdbsx commented Apr 16, 2024

You need to throw away the state every time and make sure state for different crates isn't mixed

In my experience, having a build number that increments with every new build and/or a const timestamp that you generate right at the start of a build, and throwing out everything that is older than that is actually sufficient and works in most cases, but is entirely dependent on the coincidence that right now the compiler processes files top to bottom one at a time. For needier use-cases, you generate a random u64 and throw out anything not built with that particular u64, resulting in that always-dirty behavior. You can also achieve something similar using mutexes in your proc-macro crate. This isn't the best solution obviously, but it's definitely not an insurmountable problem, especially if we are making tweaks to the actual build process and not just trying to hack around this limitation in current stable. If we can hack it to work 99% of the time in stable, we could definitely tweak the build process to close that last 1%.

for a really cursed implementation of this that I don't endorse at all, see https://github.com/sam0x17/macro_state/blob/main/macros/src/macros.rs

that said, I think going forward something like the linkme approach is probably the way to go, but would be nice if this wasn't dependent on platform-specific nuances

I am glad to see such a crate for storing macro state.
In my case, there are 2 macros; the 2nd one (an attribute macro) needs to take info from the 1st one (a function-like macro).

The core code in the 1st macro is:

if proc_has_state(&trait_name_str) {
    proc_clear_state(&trait_name_str).unwrap();
}
proc_append_state(&trait_name_str, &caller_path).unwrap();
...

And it is fetched from the 2nd macro like this:

let trait_infos = proc_read_state_vec(&trait_name_str);
...

But it still panics in the 2nd macro with the message index out of bounds: the len is 0 but the index is 0; I guess this is because the 2nd macro is expanded first. Is there a way to fix it?

(By the way, I found another approach in the crate enum_dispatch that just uses raw global vars, but I didn't fully understand it.)

@sam0x17

sam0x17 commented Apr 16, 2024

btw a much cleaner way of doing this is with the outer macro pattern

@dbsxdbsx

btw a much cleaner way of doing this is with the outer macro pattern

If the outer macro pattern means defining an inner declarative macro to carry data in a procedural macro, then I've done that in my case. Here, trait_variable is the 1st macro, and trait_var is the 2nd one.

For my case, there are 3 reasons why I want to use local state to replace the inner declarative macro in the 1st macro.

  1. A procedural macro is more powerful than a declarative macro---for future maintenance, I hope to replace the decl-macro soon;
  2. I don't know why, but with the current code, if the 1st and 2nd macros are used in the same file, the 2nd macro's user code must be invoked after/under the 1st one; otherwise the compiler errors with a message like "cannot find macro MyTrait_for_struct in this scope
    have you added the #[macro_use] on the module/import?", where MyTrait_for_struct is the hidden decl-macro produced by the 1st procedural macro trait_variable;
  3. Maybe for the same reason as 2---if the 1st macro is used in src/folder/mod.rs, and the 2nd macro is used in a sub-file like src/folder/struct_1.rs, then the same error occurs, and worse, it can't be fixed by importing the declarative macro.

I was so frustrated with the #[macro_use] issue days ago that I was seeking to replace the whole declarative macro part with a pure procedural macro.

By the way, just for my specific case, I realize it could be done without using local state or the outer macro pattern. Since I just need to know, when calling the 2nd macro, the file path of the specific trait from the 1st macro, the 2nd macro can directly search for it (with your approach stated here) based on the trait ident given as input to the 2nd macro's attribute. The only flaw with this is a little overhead from the search in the 2nd macro.

@sam0x17

sam0x17 commented Apr 17, 2024

so to be clear, an example of the outer macro pattern would be:

#[my_outer_attribute]
mod my_mod {
    #[attribute_1]
    pub fn some_fn() {
        #[attribute_2]
        let some_expr = 33;
    }

    #[attribute_1]
    pub fn some_other_fn() {
        println!("hello world");
    }
}

in the parsing code for #[my_outer_attribute] you would then write a visitor pattern that finds all the underlying attributes and any other custom syntax you want to implement, looping over the code for the entire module/item and replacing these custom macro invocations with whatever you actually want them to do. From this outer perspective, you can do things like aggregate information between all attributes, or anything needing global state, without violating the rules. Also since this is before the actual AST of the module is evaluated as rust, your code only needs to tokenize as valid rust, it does not need to be valid rust (pre-expansion), so for example you can do normally illegal things like have attributes in expr position like above, as long as you remove these during your outer macro expansion.

all of those inner macros, in turn, need not even be defined as real macros, since during your visitor pattern you will simply be finding and removing/replacing their invocations. For convenience and doc reasons it is common to provide stubs for these that compile-error if used, allowing you to attach rust docs to them (in a way that will be picked up by rust analyzer) and to force them to only be valid in the context of your outer macro pattern. They then become very similar to derive helpers.

A very full example of this is the pallet syntax in substrate: https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/balances/src/lib.rs#L191-L1168

The main caveat with this approach is it must be on a non-empty module and there is no (easy) way to spread this out over multiple files. Everything has to be in the same file that you want global access to.

For more exotic things where you want to legally access the tokens of some foreign item in a proc macro, you can use my macro_magic pattern: https://crates.io/crates/macro_magic. You'll notice the derive_impl stuff in the code linked above uses that pattern. A strong example of this is my supertrait crate which allows you to emulate default associated types in stable rust: https://crates.io/crates/supertrait

@dbsxdbsx

dbsxdbsx commented Apr 28, 2024

@sam0x17, thanks for your detailed suggestion; since supertrait is quite mind-bending, I am still chewing on it.
By the way, I want to add statements like pub x: i32; in a trait via a macro, which is invalid Rust syntax; is it impossible to do so with an attribute macro? I've done it with a function-like macro, but not an attribute macro yet---your supertrait makes fns prefixed with const valid under an attribute macro, but there at least the whole fn statement is a parsable fn item for syn. I am not sure whether my case is workable with an attribute macro.

@aikixd

aikixd commented Jun 21, 2024

A macro doesn't have to have state to aggregate data. The macro function can return a (TokenStream, T). That T can then be consumed by a dependent macro via an iterator or a slice parameter. This way caching isn't an issue---the custom values stay in sync with the output. Only the execution order would need to be implemented.

@dbsxdbsx

dbsxdbsx commented Jun 21, 2024

return a (TokenStream, T).

@aikixd, as far as I know, a procedural macro should return proc_macro::TokenStream, is there an example for your case?

@aikixd

aikixd commented Jun 21, 2024

Currently it must return the stream only, but this could be extended with an additional macro, an enum, or some other mechanism.

My line of thought is the following: most of the cases where state is required are about aggregating some data (perhaps rewriting the code along the way) and then reducing it to something. A common example would be marking handlers in web services:

#[get("/a")]
fn handlerA(context: &mut ctx) { ... } 

#[get("/b")]
fn handlerB(context: &mut ctx) { ... } 

fn main() {
    HttpServer::new().with_handlers(get_handlers!());
}

This can be implemented like so (i'm using an enum approach):

#[proc_macro_attribute]
pub fn get(...) -> proc_macro::MacroResult {
    ...
    proc_macro::MacroResult::Data(my_data)
}

#[proc_macro(depends_on: get)]
pub fn get_handlers(item: TokenStream, aggr: HashMap<&str, &[MyDataType]>) -> ... { ... }

The aggregate would contain the data from all the dependents. I'm not sure how to reconcile the data type; the presented signature wouldn't suffice.

A lot of problems can be described in terms of reduction, so this would cover a lot of ground. Also, this approach doesn't clash with caching but works in harmony with it: the rules for updating the user data are the same as for updating the token stream. The analyzer would also need to be updated accordingly.

@sam0x17

sam0x17 commented Jun 21, 2024

@aikixd the standard way of doing this (especially for things like URL handler attributes, which is one I've specifically implemented this way before) is to use the outer macro pattern. The short explanation is you will need to put all your routes in a mod declaration and have an attribute that attaches to that. That attribute then will have the proper context to loop over all the function definitions inside the (locally defined, NOT in another file) module, and manually parse + consume the URL handler attributes.

In this way you can aggregate state info across the whole module while you parse it, instead of being limited to just a single function definition like you usually are. This is the outer macro pattern.

so basically this:

#[build_routes]
pub mod routes {
    #[get("/a")]
    fn handlerA(context: &mut ctx) { ... } 
    
    #[get("/b")]
    fn handlerB(context: &mut ctx) { ... } 
}

fn main() {
    HttpServer::new().with_handlers(routes);
}

A talk I did a while ago covers this fully here: https://youtu.be/aEWbZxNCH0A?si=ToRhOiM26FkBJK8P&t=1989

Side note: if custom inner attributes ever stabilize, you can use this approach for entire files. Right now it has to be a mod definition inside the current file.

@sam0x17

sam0x17 commented Jun 21, 2024

Another more exotic way you can do this is using the macro_magic token teleportation trick (which is completely legal in stable rust and doesn't violate the pure-functionness of proc macros), where you make a custom macro_magic attribute for processing get handlers and then when you build your routes you list each of them by path.

this would look something like:

#[get("/a")]
fn handlerA(context: &mut ctx) { ... }

#[get("/b")]
fn handlerB(context: &mut ctx) { ... }

fn main() {
    HttpServer::new().with_handlers(routes![handlerA, handlerB]);
}

but note the one thing you cannot do with this approach is know the full list of handlers -- they have to be written out in main.

https://crates.io/crates/macro_magic for some relevant examples

under the hood, the way this works is #[get("/a")] will expand to the regular item plus a macro_rules that, when called, provides the caller with the tokens of the item the attribute was attached to. Then the routes! macro calls each of these based on the provided path and is thus able to collect whatever context info you need to actually build your routes, but again, you still need to know the path for each route declaration
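The expansion described above can be imitated with plain macro_rules! to see why it is legal in stable Rust (the macro names here are made up for illustration; macro_magic generates the exporter macro for you):

```rust
// Sketch of the "token teleportation" trick using only macro_rules!.
// The attribute would expand to the item itself plus an exporter macro
// that hands the item's tokens to whatever callback macro the caller names.

fn handlerA() -> &'static str { "/a" }

// Pretend #[get("/a")] expanded to `handlerA` above plus this exporter.
macro_rules! export_tokens_handlerA {
    ($callback:ident) => {
        $callback! { fn handlerA() -> &'static str { "/a" } }
    };
}

// A toy callback: extracts the function name from the teleported tokens.
macro_rules! route_name {
    (fn $name:ident $($rest:tt)*) => {
        stringify!($name)
    };
}

fn main() {
    let name = export_tokens_handlerA!(route_name);
    println!("collected route fn: {name}, path: {}", handlerA());
}
```

In the real macro_magic pattern the callback is a proc macro, so it can do arbitrary analysis of the teleported tokens rather than just stringifying a name.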
