macro: Not working inside impl blocks #16

Open
behnam opened this issue Oct 5, 2018 · 19 comments

@behnam commented Oct 5, 2018

The cached! macro doesn't work inside a struct's impl block. (I haven't tried other impl blocks.)

Repro:

#[macro_use]
extern crate cached;

#[macro_use]
extern crate lazy_static;

cached! {
    FIB;
    fn fib(n: u64) -> u64 = {
        if n == 0 || n == 1 { return n }
        fib(n-1) + fib(n-2)
    }
}

struct Foo {}

impl Foo {
    cached! {
        ANOTHER_FIB;
        fn another_fib(n: u64) -> u64 = {
            if n == 0 || n == 1 { return n }
            another_fib(n-1) + another_fib(n-2)
        }
    }
}

fn main() {
    println!("fib(10): {}", fib(10));

    let _ = Foo {};
    println!("another_fib(10): {}", another_fib(10));
}

Error:

error: expected one of `async`, `const`, `crate`, `default`, `existential`, `extern`, `fn`, `pub`, `type`, or `unsafe`, found `struct`
  --> src/main.rs:18:5
   |
18 |        cached! {
   |   _____^
   |  |_____|
   | ||
19 | ||         ANOTHER_FIB;
20 | ||         fn another_fib(n: u64) -> u64 = {
21 | ||             if n == 0 || n == 1 { return n }
22 | ||             another_fib(n-1) + another_fib(n-2)
23 | ||         }
24 | ||     }
   | ||     ^
   | ||_____|
   | |______expected one of 10 possible tokens here
   |        unexpected token
   |
   = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

error: aborting due to previous error

@jaemk (Owner) commented Oct 5, 2018

Unfortunately this won't work. cached! expands to a lazy_static block and a function def. The lazy_static block can't live directly in the impl block. (related #6)
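
Roughly speaking, the expansion for the repro above looks something like this (a simplified, hand-written sketch rather than the exact generated code; it assumes the #[macro_use] extern crate lines from the repro):

// Simplified sketch of what `cached!` expands to for `fib`. `lazy_static!`
// itself expands to a hidden struct + static + `impl Deref`, and a struct
// definition is not a legal item inside an `impl` block, which is why rustc
// reports "expected one of ..., found `struct`".
lazy_static! {
    static ref FIB: std::sync::Mutex<cached::UnboundCache<u64, u64>> =
        std::sync::Mutex::new(cached::UnboundCache::new());
}

fn fib(n: u64) -> u64 {
    use cached::Cached;
    // Check the cache first; the lock guard is dropped before recursing.
    if let Some(v) = FIB.lock().unwrap().cache_get(&n) {
        return *v;
    }
    let v = if n == 0 || n == 1 { n } else { fib(n - 1) + fib(n - 2) };
    FIB.lock().unwrap().cache_set(n, v);
    v
}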

@jaemk (Owner) commented Oct 5, 2018

I added a note on this to the readme (3dea1a1). I may be able to work around this limitation when adding a proc-macro version (brought up in #13)

@behnam (Author) commented Oct 5, 2018

Would it make sense to have a pair of macros, one for the cache-name/lazy_static, and another one only the fn item, taking the name of the cache as input, which can sit inside impl blocks?

Of course, the cache key type needs to be explicit again here. I think that's a fairly small cost to pay.

@jaemk (Owner) commented Oct 8, 2018

Yes, that's definitely an option! The existing macros are in need of a re-think anyway

@Rahix (Contributor) commented Oct 22, 2018

As a workaround, the following does the job for now:

#[macro_use] extern crate cached;

use std::time::{Instant, Duration};
use std::thread::sleep;

use cached::SizedCache;

struct Foo;

impl Foo {
    fn slow_fn(&self, n: u32) -> String {
        cached! {
            SLOW_FN: SizedCache<(u32), String> = SizedCache::with_size(50);
            fn inner(n: u32) -> String = {
                if n == 0 { return "done".to_string(); }
                sleep(Duration::new(1, 0));
                inner(n-1)
            }
        }

        inner(n)
    }
}


pub fn main() {
    println!("Initial run...");
    let now = Instant::now();
    let foo = Foo;
    let _ = foo.slow_fn(10);
    println!("Elapsed: {}\n", now.elapsed().as_secs());

    println!("Cached run...");
    let now = Instant::now();
    let _ = foo.slow_fn(10);
    println!("Elapsed: {}\n", now.elapsed().as_secs());

    // Inspecting the cache is not possible this way
}

@kirhgoff commented Nov 1, 2018

Nice one! I still have problems using self references (or any variable from outside the cached block) in the cached function: "can't capture dynamic environment in a fn item". I understand the problem behind that: you can change the state and the cache will then return stale values, but in my case I'm OK with that, I just need a cache. It's sad that I can't use it and need to build something on my own. Any suggestions?

@Rahix (Contributor) commented Nov 1, 2018

Hmm, you could give self as a parameter to the inner function, but that would require your struct to be Eq, Hash and worst of all: Clone.

I think the cached macros won't do the job here. But you can implement it yourself! Example:

extern crate cached;

use std::time::{Instant, Duration};
use std::thread::sleep;

use cached::SizedCache;

pub struct Foo {
    bar: u32,
}

impl Foo {
    pub fn new(bar: u32) -> Foo {
        Foo {
            bar,
        }
    }

    pub fn slow_fn(&self, n: u32) -> String {
        // The cache is static.  Please don't get scared by this monster ;)
        static SLOW_FN_CACHE: cached::once_cell::sync::Lazy<std::sync::Mutex<SizedCache<u32, String>>> =
            cached::once_cell::sync::Lazy::new(|| std::sync::Mutex::new(SizedCache::with_size(50)));

        // We need to clone the key so we can store it in the cache later.
        let key = n.clone();

        // This block locks the cache and checks if we have a cache-hit
        {
            let mut cache = SLOW_FN_CACHE.lock().unwrap();
            let res = cached::Cached::cache_get(&mut *cache, &key);
            if let Some(res) = res {
                // Return early in that case
                return res.clone();
            }
        }

        // If we have missed the cache, compute the value.
        // This is where your code goes!
        let val = {
            // You can access `self` here!
            if n == self.bar {
                return "done".to_string();
            }
            sleep(Duration::new(1, 0));
            // Even call the cached function recursively!
            self.slow_fn(n - 1)
        };

        // Write the value to the cache so we get a hit next time
        let mut cache = SLOW_FN_CACHE.lock().unwrap();
        cached::Cached::cache_set(&mut *cache, key, val.clone());
        val
    }
}


pub fn main() {
    println!("Initial run...");
    let now = Instant::now();
    let foo = Foo::new(42);
    let _ = foo.slow_fn(52);
    println!("Elapsed: {}\n", now.elapsed().as_secs());

    println!("Cached run...");
    let now = Instant::now();
    let _ = foo.slow_fn(52);
    println!("Elapsed: {}\n", now.elapsed().as_secs());

    // Inspecting the cache is not possible this way
}

This is essentially what the cached! macro expands to. You will need to adapt the key to hold all parameters plus the relevant internal state from the struct, though, which I haven't done here (see the small sketch below).

I don't know if this is a good solution, but at least it would work, I guess ...
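
In this example, since bar affects the result, the sketch would need two changes: the cache type becomes SizedCache<(u32, u32), String>, and the key is widened to include the state, roughly:

// Sketch: include the relevant struct state in the key so that two `Foo`s
// with different `bar` values don't share cache entries.
let key = (n, self.bar);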

@kirhgoff commented Nov 3, 2018

Thanks, will try this approach

@Rahix (Contributor) commented Nov 4, 2018

@jaemk: The problem @kirhgoff encountered reveals an API oversight, in my opinion:

Cached functions should be able to take 'context' parameters that are not cached.

@jaemk (Owner) commented Nov 4, 2018

@Rahix , @kirhgoff - Yeah, the cached macros really need a redesign. Until that happens, could the issue of uncached "context" params be solved by using the cached_key! macro? This one lets you specify an expression (evaluated in the function's scope) that's used as the key:

#[macro_use] extern crate cached;

use std::thread::sleep;
use std::time::Duration;

use cached::SizedCache;

cached_key! {
    LENGTH: SizedCache<String, usize> = SizedCache::with_size(50);
    Key = { format!("{}{}", a, b) };
    fn length(a: &str, b: &str, c: &str, d: &str) -> usize = {
        let size = a.len() + b.len();
        println!("ignoring: {:?}", &[c, d]);
        sleep(Duration::new(size as u64, 0));
        size
    }
}

pub fn main() {
    println!("Initial run...");
    length("1", "123", "not", "cached");

    println!("again ...");
    length("1", "123", "still not", "cached");
}

So instead of capturing the environment, you could pass what you need as a reference and exclude it from the cache-key.
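
For example, something along these lines (a sketch; the Config type and compute function are made up here for illustration):

#[macro_use] extern crate cached;

use std::thread::sleep;
use std::time::Duration;

use cached::SizedCache;

pub struct Config {
    multiplier: u64,
}

cached_key! {
    COMPUTE: SizedCache<u32, u64> = SizedCache::with_size(50);
    // Only `n` participates in the key; `cfg` is uncached context.
    Key = { n };
    fn compute(cfg: &Config, n: u32) -> u64 = {
        sleep(Duration::new(1, 0));
        u64::from(n) * cfg.multiplier
    }
}

pub fn main() {
    let cfg = Config { multiplier: 3 };
    println!("first call: {}", compute(&cfg, 7));  // slow: computed
    println!("second call: {}", compute(&cfg, 7)); // fast: served from the cache
}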

@flavius commented Apr 14, 2019

This would be such a good feature to have.

@naiveai commented Nov 19, 2019

Hey, I know it's been a hell of a long time, but is there any news on this, or any way I can help implement it? I've been digging through the code to get a preliminary understanding, so I could put together a PR in a while if there isn't already a simpler way to do this. @jaemk

@BinChengZhao commented

@Rahix Your example code really impressed me, thank you!

@omid (Contributor) commented Aug 8, 2024

A year ago, we removed lazy_static completely!

So now we can implement this feature. Actually, I did implement it locally, but I want to be sure the way I'm doing it is fine for you (especially @jaemk).

The way I'm doing it is backward compatible. For cached, io_cached and once inside impls, you just need to enable the in_impl = true flag when you define them.

If you agree, show me the 👍🏼; if not, write a comment about your use case and why it's not suitable, or if you have a better suggestion or better naming.

For example, this code:

struct Test;

impl Test {
    #[cached(size = 50, in_impl = true)]
    pub fn slow_fn(n: u32) -> String {
        if n == 0 {
            return "done".to_string();
        }
        sleep(Duration::new(1, 0));
        Self::slow_fn(n - 1)
    }
}

Generates this:

struct Test;
impl Test {
    ///Cached static for the [`slow_fn`] function.
    pub fn slow_fn_get_cache_ident() -> &'static ::cached::once_cell::sync::Lazy<
        std::sync::Mutex<cached::SizedCache<(u32), String>>,
    > {
        static SLOW_FN: ::cached::once_cell::sync::Lazy<
            std::sync::Mutex<cached::SizedCache<(u32), String>>,
        > = ::cached::once_cell::sync::Lazy::new(|| std::sync::Mutex::new(
            cached::SizedCache::with_size(50usize),
        ));
        &SLOW_FN
    }
    ///Origin of the cached function [`slow_fn`].
    ///This is a cached function that uses the [`SLOW_FN`] cached static.
    pub fn slow_fn_no_cache(n: u32) -> String {
        if n == 0 {
            return "done".to_string();
        }
        sleep(Duration::new(1, 0));
        Self::slow_fn(n - 1)
    }
    ///This is a cached function that uses the [`SLOW_FN`] cached static.
    pub fn slow_fn(n: u32) -> String {
        use cached::Cached;
        use cached::CloneCached;
        let key = (n.clone());
        {
            let mut cache = Self::slow_fn_get_cache_ident().lock().unwrap();
            if let Some(result) = cache.cache_get(&key) {
                return result.to_owned();
            }
        }
        let result = Self::slow_fn_no_cache(n);
        let mut cache = Self::slow_fn_get_cache_ident().lock().unwrap();
        cache.cache_set(key, result.clone());
        result
    }
    ///Primes the cached function [`slow_fn`].
    #[allow(dead_code)]
    ///This is a cached function that uses the [`SLOW_FN`] cached static.
    pub fn slow_fn_prime_cache(n: u32) -> String {
        use cached::Cached;
        let key = (n.clone());
        let mut cache = Self::slow_fn_get_cache_ident().lock().unwrap();
        let result = Self::slow_fn_no_cache(n);
        cache.cache_set(key, result.clone());
        result
    }
}

@sirno commented Aug 30, 2024

I'd be happy to get access to this! I suppose this could also be extended to support self arguments?

@omid (Contributor) commented Aug 30, 2024

@sirno my changes depend heavily on #218, which has been sitting there for 3 weeks :/

I suppose this could also be extended to support self arguments?

I don't see any reason why it couldn't, but I also haven't tested it.

@jaemk (Owner) commented Aug 30, 2024

Thanks @omid! The only thing missing is providing a way to access the cache, since you can't grab the static directly when it's defined inside the cached function.

@omid (Contributor) commented Aug 30, 2024

@jaemk isn't calling slow_fn_get_cache_ident enough to have access to it?
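
For example, something like this should work against the generated code above (a usage sketch, not tested):

use cached::Cached;

fn print_cache_stats() {
    // `slow_fn_get_cache_ident()` returns the generated static; lock it and
    // inspect it through the `Cached` trait.
    let cache = Test::slow_fn_get_cache_ident().lock().unwrap();
    println!("entries: {}", cache.cache_size());
    println!("hits: {:?}", cache.cache_hits());
}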

@jaemk (Owner) commented Aug 30, 2024

Yes, you're right, looks good! I'll take a closer look and merge later today
