CPS-0028? | Approaches to call-by-need in UPLC #1150

Open
kozross wants to merge 6 commits into cardano-foundation:master from mlabs-haskell:laziness
Conversation

kozross (Contributor) commented Feb 12, 2026

This describes possible ways forward to achieve 'true' laziness as part of UPLC. This is meant to be an additional mechanism to Delay and Force, as these constructs have no notion of whether something has already been evaluated or not. What we are discussing specifically adds the capability of 'remembering' what has already been evaluated, similarly to how laziness works in GHC Haskell, for example.

Rendered

@rphair rphair changed the title True laziness in UPLC CPS-???? | Approaches to true laziness in UPLC Feb 12, 2026
rphair (Collaborator) left a comment

Marked Triage for next CIP meeting: https://hackmd.io/@cip-editors/128 ... tagging @effectfully @colll78 in the meantime to kick-start some review.

@rphair rphair added Category: Plutus Proposals belonging to the 'Plutus' category. State: Triage Applied to new PR after editor cleanup on GitHub, pending CIP meeting introduction. labels Feb 12, 2026
kozross (Contributor, Author) commented Feb 12, 2026

@rphair - once again I feel, due to costing concerns in the CPS, @kwxm may also have something to add here.

rphair (Collaborator) left a comment

@kozross much as posted in #1146 (comment), the CIP meeting consensus today was that your early-expressed goal of @kwxm (or equivalent, if there is one) providing an expert opinion about cost-related applicability of the CPS — without any practical objection over the suggested approach — would be required to proceed with this as a CPS candidate.

Also I thought I had mentioned this before but I had also tagged @effectfully above as someone who'd expressed deep interest & expertise in the details of Plutus semantics: a subject suggested by my own understanding of "laziness" in compilers. If there are additional or more appropriate choices then please by all means tag them here.

@rphair rphair added State: Unconfirmed Triaged at meeting but not confirmed (or assigned CIP number) yet. and removed State: Triage Applied to new PR after editor cleanup on GitHub, pending CIP meeting introduction. labels Feb 18, 2026
Unisay (Contributor) commented Mar 16, 2026

The costing section mentions that "the cost of computing a 'true' lazy computation should only be 'paid' once" — but this only addresses the CPU side.

Memoization is a time-space tradeoff: you're caching results to avoid recomputation. In UPLC's cost model, memory is budgeted separately from CPU. A few things worth thinking about here:

  • A lazy thunk starts small (just a closure), but once forced, the cached result could be significantly larger. When does the memory cost get charged — at creation or at forcing?
  • Cached values stick around for the rest of execution. Under strict evaluation, intermediates get consumed and freed right away. Memoization changes that lifetime, which could matter a lot for scripts operating near their memory budget.
  • In an adversarial on-chain environment, someone could construct a script with many cheap-looking thunks that balloon when forced. How does the memory budget account for that?

It feels like any proposal here should address the memory dimension of costing with at least as much attention as the CPU dimension — especially given the resource-constrained setting that motivates the whole CPS.

Should the Goals section include something explicit about memory behavior?

effectfully (Contributor) commented Mar 16, 2026

@rphair

Also I thought I had mentioned this before but I had also tagged @effectfully above as someone who'd expressed deep interest & expertise in the details of Plutus semantics: a subject suggested by my own understanding of "laziness" in compilers. If there are additional or more appropriate choices then please by all means tag them here.

Sorry, I'm no longer a part of the Plutus team.

@Unisay

In an adversarial on-chain environment, someone could construct a script with many cheap-looking thunks that balloon when forced. How does the memory budget account for that?

Is it any different from an application of a lambda though?

kozross (Contributor, Author) commented Mar 16, 2026

@Unisay - those are excellent points, thank you! Definitely something I had overlooked. I will amend the CPS text to discuss this in more detail, and mention the issues you've raised.

Edit: Having thought about it a little and done some experiments, there are two comments I want to add in response.

The claim that "under strict evaluation, intermediates get consumed and freed right away" may be theoretically true, but this is definitely not how the CEK machine that evaluates UPLC works. To illustrate, I've put together a simple set of benchmarks comparing a 'fused' implementation of zip3 to one based on two calls to zip2. I used Plutarch as the implementation language, but this should translate to Plinth or Aiken just as well:

module Main (main) where

import PlutusCore (Contains, DefaultUni)
import Data.Kind (Type)
import Plutarch.Prelude (Term, S, PBuiltinList (PNil, PCons), 
  PInteger, pconstant, (#), (:-->), 
  phoistAcyclic, plam, (#*), (#+), pfix, pmatch, PlutusRepr, 
  pcon, (#$), ppairDataBuiltin, pdata, PIsData, pfromData,
  PBuiltinPair (PBuiltinPair))
import Test.Tasty (testGroup)
import Plutarch.Test.Bench (
  bench,
  defaultMain,
 )
import Plutarch.Test.Utils (precompileTerm)

main :: IO ()
main = defaultMain $ testGroup "Zip" [
  testGroup "pzip3Direct" [
    bench "length 5" (precompileTerm (pzip3Direct f) # plistA5 # plistB5 # plistC5),
    bench "length 10" (precompileTerm (pzip3Direct f) # plistA10 # plistB10 # plistC10),
    bench "length 15" (precompileTerm (pzip3Direct f) # plistA15 # plistB15 # plistC15)
    ],
  testGroup "pzip3Indirect" [
    bench "length 5" (precompileTerm (pzip3Indirect f) # plistA5 # plistB5 # plistC5),
    bench "length 10" (precompileTerm (pzip3Indirect f) # plistA10 # plistB10 # plistC10),
    bench "length 15" (precompileTerm (pzip3Indirect f) # plistA15 # plistB15 # plistC15)
    ]
  ]

-- Test data

plistA5 :: forall (s :: S) . Term s (PBuiltinList PInteger)
plistA5 = pconstant (iota 1 5)

plistA10 :: forall (s :: S) . Term s (PBuiltinList PInteger)
plistA10 = pconstant (iota 1 10)

plistA15 :: forall (s :: S) . Term s (PBuiltinList PInteger)
plistA15 = pconstant (iota 1 15)

plistB5 :: forall (s :: S) . Term s (PBuiltinList PInteger)
plistB5 = pconstant (iota 6 10)

plistB10 :: forall (s :: S) . Term s (PBuiltinList PInteger)
plistB10 = pconstant (iota 6 15)

plistB15 :: forall (s :: S) . Term s (PBuiltinList PInteger)
plistB15 = pconstant (iota 6 20)

plistC5 :: forall (s :: S) . Term s (PBuiltinList PInteger)
plistC5 = pconstant (iota 21 25)

plistC10 :: forall (s :: S) . Term s (PBuiltinList PInteger)
plistC10 = pconstant (iota 21 30)

plistC15 :: forall (s :: S) . Term s (PBuiltinList PInteger)
plistC15 = pconstant (iota 21 35)

f :: forall (s :: S) . Term s (PInteger :--> PInteger :--> PInteger :--> PInteger)
f = phoistAcyclic $ plam $ \x y z -> x #* (y #+ z)

pzip3Direct :: forall (a :: S -> Type) (b :: S -> Type) (c :: S -> Type) (d :: S -> Type) (s :: S) .
  (DefaultUni `Contains` PlutusRepr a,
    DefaultUni `Contains` PlutusRepr b,
    DefaultUni `Contains` PlutusRepr c,
    DefaultUni `Contains` PlutusRepr d) =>
  Term s (a :--> b :--> c :--> d) -> 
  Term s (PBuiltinList a :--> PBuiltinList b :--> PBuiltinList c :--> PBuiltinList d)
pzip3Direct g = pfix $ \self -> plam $ \xs ys zs -> pmatch xs $ \case
  PNil -> pcon PNil
  PCons x xs' -> pmatch ys $ \case
    PNil -> pcon PNil
    PCons y ys' -> pmatch zs $ \case
      PNil -> pcon PNil
      PCons z zs' -> pcon . PCons (g # x # y # z) $ self # xs' # ys' # zs'

pzip3Indirect :: forall (a :: S -> Type) (b :: S -> Type) (c :: S -> Type) (d :: S -> Type) (s :: S) .
  (DefaultUni `Contains` PlutusRepr a,
    DefaultUni `Contains` PlutusRepr b,
    DefaultUni `Contains` PlutusRepr c,
    DefaultUni `Contains` PlutusRepr d,
    PIsData b,
    PIsData c) =>
  Term s (a :--> b :--> c :--> d) -> 
  Term s (PBuiltinList a :--> PBuiltinList b :--> PBuiltinList c :--> PBuiltinList d)
pzip3Indirect g = plam $ \xs ys zs -> 
  pzip2 (plam $ \x yz -> pmatch yz $ \case
    PBuiltinPair y z -> g # x # pfromData y # pfromData z) # xs #$ pzip2 (plam $ \y z -> ppairDataBuiltin # pdata y # pdata z) # ys # zs

-- Helpers

iota :: Integer -> Integer -> [Integer]
iota start stop = [start, start + 1 .. stop]

pzip2 :: forall (a :: S -> Type) (b :: S -> Type) (c :: S -> Type) (s :: S) . 
  (DefaultUni `Contains` PlutusRepr a,
    DefaultUni `Contains` PlutusRepr b,
    DefaultUni `Contains` PlutusRepr c) => 
  Term s (a :--> b :--> c) -> 
  Term s (PBuiltinList a :--> PBuiltinList b :--> PBuiltinList c)
pzip2 g = pfix $ \self -> plam $ \xs ys -> pmatch xs $ \case
  PNil -> pcon PNil
  PCons x xs' -> pmatch ys $ \case
    PNil -> pcon PNil
    PCons y ys' -> pcon . PCons (g # x # y) $ self # xs' # ys'

These are the results when run:

Benchmark zip: RUNNING...
Zip
  pzip3Direct
    length 5:  OK
      CPU  | 4442715
      MEM  | 19780  
      SIZE | 80     
    length 10: OK
      CPU  | 8565330
      MEM  | 37460  
      SIZE | 97     
    length 15: OK
      CPU  | 12687945
      MEM  | 55140   
      SIZE | 114     
  pzip3Indirect
    length 5:  OK
      CPU  | 8422685
      MEM  | 40740  
      SIZE | 141    
    length 10: OK
      CPU  | 16125270
      MEM  | 76880   
      SIZE | 157     
    length 15: OK
      CPU  | 23827855
      MEM  | 113020  
      SIZE | 174     

As we can see, pzip3Indirect (which uses pzip2) uses just about twice as much memory as pzip3Direct on the same arguments. This suggests that the cost of the intermediate very much is still paid, rather than 'loaned and returned'. This is consistent with other work I have done, including pull arrays in Plutarch, which are specifically designed to fuse away intermediates with CIP-138 array computations. Thus, I don't think this is a concern.

With regard to the adversarial use of thunks you describe, I agree with @effectfully - this is no different to having lambdas with big blowouts in memory use. Am I missing something?

effectfully (Contributor) commented

The claim that "under strict evaluation, intermediates get consumed and freed right away" may be theoretically true, but this is definitely not how the CEK machine that evaluates UPLC works.

The on-chain CEK machine is capable of freeing actual memory (not to the extent of the CESK machine), but yes, it's completely irrelevant, because MEM doesn't measure peak memory consumption.

With regard to the adversarial use of thunks you describe, I agree with @effectfully - this is no different to having lambdas with big blowouts in memory use. Am I missing something?

You can ask Claude directly and avoid going through the low-bandwidth interface (@Unisay).

Unisay (Contributor) commented Mar 17, 2026

You can ask Claude directly and avoid going through the low-bandwidth interface (@Unisay).

Agree! As long as you know the correct question to ask (I do) - you should.
Moreover, I'd integrate the AI-review as an automated CI check (cc: @rphair wdyt about this idea?)

Unisay (Contributor) commented Mar 17, 2026

it's completely irrelevant, because MEM doesn't measure peak memory consumption.

This raises (at least) a few questions:

  • What does MEM (actually) measure if not the peak memory consumption?
  • What exactly should MEM measure in theory?
  • Is there a discrepancy?
    • If there is: should we evaluate designs wrt former or latter or both?

I believe all answers deserve to be documented.

effectfully (Contributor) commented

In theory, MEM measures something like "total allocations that aren't 100% certain to be transient".

In practice, MEM is fucked up and

  1. doubles as a CEK step counter
  2. is charged for constant-cost builtins with literally zero meaning to it, pretty much random

None of that is relevant here though. The question is "is this any worse than applied lambdas?". I think the answer is "no", but I didn't really look into it.

Unisay (Contributor) commented Mar 17, 2026

The Plutus Report (publicly available) gives a hint about the original idea.

wadler commented Mar 17, 2026

Overall, the proposal is sensible. @zliu41 has performed preliminary experiments where applying Force to Delay stores the result, so that it is computed at most once. It turns out to be not vastly more expensive (and, of course, sometimes far less expensive) than when applying Force to Delay recomputes each time.

Terminology: The proposal uses "true laziness" to refer to the case where applying Force to Delay computes at most once, as opposed to the version that recomputes every time. Better to use the standard terminology from the literature: call-by-need -- compute at most once and store the result; call-by-name -- recompute every time. The term "lazy" is a synonym for call-by-need. Haskell is call-by-need or lazy (no need to say "true lazy") while Force-Delay in Plutus is call-by-name (not lazy, or even "false lazy").
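To make the terminology concrete, here is a small Haskell model (an editor's illustrative sketch; `delayNeed` and `forceNeed` are hypothetical names, not part of any Plutus API): a call-by-need thunk caches its result in a mutable cell after the first force, while call-by-name simply re-runs the computation on every use.

```haskell
{-# LANGUAGE LambdaCase #-}

module Main (main) where

import Data.IORef (IORef, modifyIORef', newIORef, readIORef, writeIORef)

-- A call-by-need thunk: either a pending computation or a cached result.
newtype Thunk a = Thunk (IORef (Either (IO a) a))

delayNeed :: IO a -> IO (Thunk a)
delayNeed act = Thunk <$> newIORef (Left act)

forceNeed :: Thunk a -> IO a
forceNeed (Thunk ref) = readIORef ref >>= \case
  Right v  -> pure v                  -- already evaluated: reuse cached result
  Left act -> do
    v <- act                          -- first force: run the computation
    writeIORef ref (Right v)          -- remember the result
    pure v

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  let expensive = modifyIORef' counter (+ 1) >> pure (6 * 7)
  -- Call-by-name: each use re-runs the computation (two uses, two runs).
  _ <- expensive
  _ <- expensive
  byNameRuns <- readIORef counter
  writeIORef counter 0
  -- Call-by-need: two forces, but the computation runs at most once.
  t <- delayNeed expensive
  _ <- forceNeed t
  _ <- forceNeed t
  byNeedRuns <- readIORef counter
  print (byNameRuns, byNeedRuns)      -- (2,1)
```

The `Either` cell is exactly the 'remembering' capability the CPS asks for, which Force/Delay in current UPLC (call-by-name) lack.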

kozross (Contributor, Author) commented Mar 17, 2026

@wadler - agreed, better to use the standard terminology. Will change.

effectfully (Contributor) commented

@wadler

It turns out to be not vastly more expensive

It literally doesn't matter btw. Force costs the same as LamAbs but is far "cheaper" in terms of actual CPU time. So there's plenty of leeway there, Force can be made much slower without any security repercussions whatsoever.

zliu41 (Contributor) commented Mar 17, 2026

lazy-delay-force.pdf

The summary of my evaluation is attached. TLDR: lazy delay/force is about 2x the cost of by-name delay/force. For well-optimized programs that don't have excessive unnecessary delay/force, the overhead is less than 3%. For poorly-optimized programs, the overhead can be about 10%, or even 90%+ in extreme cases.

Incidentally I find the motivation a bit weak, as I couldn't think of a good example where laziness is a clear win. I'll look into it and try to understand the Boehm-Berarducci use case.

rphair (Collaborator) commented Mar 18, 2026

@Unisay #1150 (comment): Moreover, I'd integrate the AI-review as an automated CI check (cc: @rphair wdyt about this idea?)

Please correct me if I'm wrong, but I've re-read the whole thread above & it doesn't sound like AI review is being recommended here for software tasks, but for CIP review itself.

As an author I've never entered my own writing into an LLM and am not really happy with crawler LLMs going over it either. As a CIP editor it tells me nothing of use because, more often than not, what might be considered errors or inconsistencies in evolving standards documents vs. the hoard of LLM knowledge are exactly what makes these outlying statements useful and "true" in the long run.

The current team of editors doesn't welcome AI in CIP review but doesn't discriminate against it either as long as the AI is used to support a review rather than to simply produce it:

  • There was one case where a drop-in AI promoter posted a copy-paste summary of a CIP into a PR review thread and we deleted it as off-topic: which it was, as an irrelevant word soup designed to sound like a sophisticated analysis but essentially just repeating the background of the problem (i.e. not the operative part of a CIP, and not the difficult & delicate scoping part of a CPS).

So I think that putting AI CIP review into this repository's CI would be unnecessary, wasteful, irrelevant, and perhaps even expensive (relatively: even at pennies per LLM token use, this would be head & shoulders over other free tier services).

However I do think the open source nature of CIP material would encourage AI enthusiasts to build CIP summaries — with perhaps a body of review, and maybe developer highlights — on some aggregated web site using GitHub merged CIPs, PRs, and maybe even issues as a source.

If you think any of this means our current CIP process is incomplete, you can open up a repo issue and/or contribute to this one with a related consideration & in any case let's continue the discussion (if still interested) elsewhere (cc @Ryun1 @perturbing):

rphair (Collaborator) commented Mar 18, 2026

@kozross from my distantly underqualified level it seems like the welcome arrival of subject matter experts into this thread has provided editors (cc @Ryun1 @perturbing) with confirmation that this is a viable CPS with a practical scope for CIPs to be produced within this field.

Therefore I'm adding it to the next CIP meeting agenda where I feel sure we'll now be able to confirm this as a candidate & assign a CPS number: https://hackmd.io/@cip-editors/131 — which would also provide 2 weeks to do some updates as indicated above (e.g. #1150 (comment)) & from further review.

kozross (Contributor, Author) commented Mar 19, 2026

As per @wadler 's comment, I have revised all mentions of 'true' laziness to use the term 'call-by-need'.

rphair (Collaborator) left a comment

@kozross it's good to have updated the title with a better description of the proposed functionality... but we're still just considering Approaches in the CPS I think (?) and so the "feature" title would be more appropriate to an implementation CIP...

@rphair rphair changed the title CPS-???? | Approaches to true laziness in UPLC CPS-???? | Approaches to call-by-need in UPLC Mar 20, 2026
zliu41 (Contributor) commented Mar 24, 2026

I haven't fully understood the BB example, but regarding this argument:

Using Delay and Force here is of no help. While making the arguments to k1 (and k2) delayed would avoid the over-evaluation problem above, it leads to a different problem: re-evaluation.

Can you not avoid re-evaluation by forcing the delayed expression at the right time? For example, suppose there is a large and expensive expression that is used in two of the three branches, and in one of the branches it is used twice. You can do this to avoid unnecessary evaluation (this was suggested to me by @wadler):

     let x = delay expensive in
     case scrut of
       A -> ... force x ...
       B -> let y = force x in ... y ... y ...
       C -> ...

Does this not apply to the BB encoding example?

@rphair rphair changed the title CPS-???? | Approaches to call-by-need in UPLC CPS-0028? | Approaches to call-by-need in UPLC Apr 1, 2026
rphair (Collaborator) left a comment

@kozross this was discussed at the CIP meeting today, where it was confirmed (as I suggested before) that the practicality of this problem statement has been established by GitHub review since it was originally introduced at an earlier meeting. We are therefore happy to confirm this as a candidate, and editors will follow the technical discussions here until subject matter experts agree that it is ready for a final review.

Please rename the containing directory to CPS-0028 and update the "Rendered" link in the original post 🎉

@rphair rphair added State: Confirmed Candidate with CIP number (new PR) or update under review. and removed State: Unconfirmed Triaged at meeting but not confirmed (or assigned CIP number) yet. labels Apr 1, 2026
zliu41 (Contributor) commented Apr 1, 2026

I'd like to see the motivation strengthened a bit more. I'll give it some thought in the next few days, and possibly dig up some old discussions.

kozross (Contributor, Author) commented Apr 1, 2026

@rphair and @Ryun1 - updated the title, CPS number and directory.

kozross (Contributor, Author) commented Apr 1, 2026

@zliu41 - it isn't clear how to do this in general. Let me give an example in context to illustrate why this is so difficult.

As part of current work on the Grumplestiltskin project, we have been considering elliptic curves over second-degree finite field extensions. These are even worse than the example given, as second-degree field extensions are pairs of finite field elements, which makes their operations (particularly multiplication and division) far more involved. In particular, they require yet more 'auxiliary values' (specifically a field irreducible), and each operation in particular corresponds to quite a large number of builtins.

As part of our work, we did a pedantic comparison of both direct and indirect representations, even given the issues described already. We have found that indirect representations are significantly better, but still have huge costs, caused by the precise problem I described before.

As to why this is hard to avoid, consider the case of elliptic curve point scaling. If we observe the implementation, we can see that the t argument (the point being scaled) is re-used every time our exponentiation-by-squaring encounters an odd exponent in the 'ladder', which in bad cases can happen as many as log(n) times, where n is the scalar's value. This is done by 'calling out' to elliptic curve point addition, which has to happen, as one argument is unknown (and may easily be the point at infinity). How we would avoid repeated re-evaluation of the t argument in this context is not clear to me: you can't 'float out' the evaluation of the k1 continuation, as it must be used as a 'component' of the new continuation produced as a result of doubled #+ t'.

Now, I might be wrong in this particular case, and it could be possible to avoid this. However, I certainly can't think of how. Furthermore, this is a case that's both extremely natural in this situation, and likely to be extremely bad. If you see the scaling benchmarks, you can observe that the exunit and memory costs grow out of control pretty quickly. Moreover, the exponent in this context is tiny (64), whereas in practice, the required scalar will be a high-entropy random number that is far, far bigger. And this is just for verification - constructing the necessary bilinear pairing function would be many, many times worse than this.
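To make the ladder structure concrete, here is a toy sketch (editor-added, illustrative only: integer addition stands in for elliptic-curve point addition, and `scale` is a hypothetical name, not the Grumplestiltskin implementation). The base point t is consulted again at every odd exponent of the exponentiation-by-squaring, which is exactly where repeated re-evaluation bites under call-by-name.

```haskell
module Main (main) where

-- Exponentiation-by-squaring ("double-and-add") over an arbitrary binary
-- operation with identity. With (+) and 0 this computes n * t; with
-- elliptic-curve point addition and the point at infinity it would compute
-- scalar multiplication.
scale :: (a -> a -> a) -> a -> Integer -> a -> a
scale add zero n t
  | n <= 0    = zero
  | otherwise = go n
  where
    go 1 = t
    go k
      | even k    = double (go (k `div` 2))
      | otherwise = add t (double (go (k `div` 2)))  -- t consulted again here
    double p = add p p

main :: IO ()
main = print (scale (+) 0 13 (5 :: Integer))  -- 13 * 5 = 65
```

For n = 13 (binary 1101) the odd branch fires three times, so an expensive t is re-evaluated on each of those steps under call-by-name; under call-by-need it would be evaluated once and cached.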

rphair (Collaborator) left a comment

@kozross re: #1150 (comment) & earlier #1150 (review) the containing directory needs to be further renamed from CPS-28 to CPS-0028.

kozross (Contributor, Author) commented Apr 1, 2026

@rphair - done.

zliu41 (Contributor) commented Apr 2, 2026

Regarding motivation:

Memoization or tabulation algorithms

This is true for general programming but it's unclear whether any real validators, whose role is to validate transactions, require memoization or tabulation. Also, many such algorithms require a data structure that can store thunks and supports efficient indexing, which UPLC doesn't have (builtin Array doesn't fit because it cannot store thunks).

Easier to write Haskell-like code

I like this motivation personally but it's quite specific to Plinth.

Boehm-Berarducci encodings and escape continuations

This is a useful example, but it's a bit too complex, with many details that may not be relevant. It would be better to extract a general argument, and use this example as concrete evidence.

With that said, I do think there's a decent motivation: I don't think laziness is strictly necessary to achieve low cost/size, but there's a good argument that it makes things more convenient. How about the following:

1. Convenience. To reuse the example I mentioned above: suppose there is a large and expensive expression that is used in two of the three branches, and in one of the branches it is used twice. In this case, you can indeed avoid duplicating code or work without laziness:

     let x = delay expensive in
     case scrut of
       A -> ... force x ...
       B -> let y = force x in ... y ... y ...
       C -> ...

but this is cumbersome and not ergonomic, not to mention that some surface languages may not even have delay/force constructs, making it impossible to write this code (I think this is the case with Aiken, but I'm not certain). Users would much rather write something like this (not just in Plinth but in any surface language):

     let x = lazy expensive in
     case scrut of
       A -> ... x ...
       B -> ... x ... x ...
       C -> ...

But if there's no laziness in UPLC, the burden would be on the compiler to optimize the latter into the former. I don't think any current compiler does this, and indeed, doing so is non-trivial (it would require a number of analyses, some of which may not be decidable) and may not preserve semantics. With lazy delay/force this becomes much easier.

I think the BB encoding example is a specific, concrete example of this argument.

2. Code reuse. Here's a blog post by Augustsson that @effectfully pointed me to before, which argues that "Strict evaluation is fundamentally flawed for function reuse". Although validators are relatively small programs, code reuse is still very much relevant: the validation logic can frequently be expressed as a composition of list filtering, mapping, small predicates etc. Manually inlining or fusing things makes validators harder to audit and more prone to bugs.

A number of general-purpose strict languages also support a limited form of laziness (e.g., Scala and OCaml), likely for reasons similar to 1 and 2. In particular, here's an example used by "Functional Programming in Scala" to motivate lazy lists:

List(1, 2, 3, 4).map(_ + 10).filter(_ % 2 == 0).map(_ * 3)
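For comparison (an editor-added sketch), the same pipeline in Haskell, where call-by-need lists let the three stages stream on demand rather than each producing a fully materialised intermediate list before the next begins:

```haskell
module Main (main) where

-- Same pipeline as the Scala example. Under call-by-need, each element of
-- the input flows through map/filter/map as the result is consumed, so the
-- intermediate lists never need to exist in full at once.
pipeline :: [Int] -> [Int]
pipeline = map (* 3) . filter even . map (+ 10)

main :: IO ()
main = print (pipeline [1, 2, 3, 4])  -- [36,42]
```

This is the composability-without-fusion property that the Augustsson argument above appeals to: the three stages are written as reusable parts, and laziness keeps their composition cheap.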

3. Some related discussions/comments in the Cardano community.
