
[pull] main from facebook:main #177


Merged
merged 159 commits into from
Jul 15, 2025

Conversation

pull[bot]

@pull pull bot commented Jun 6, 2025

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.1)

Can you help keep this open source service alive? 💖 Please sponsor : )

unstubbable and others added 2 commits June 5, 2025 15:08
When I added the `ready_for_review` event in #32344, no notifications
for opened draft PRs were sent due to some other condition. This is not
the case anymore, so we need to exclude draft PRs from triggering a
notification when the workflow is run because of an `opened` event. This
event is still needed because the `ready_for_review` event only fires
when an existing draft PR is converted to a non-draft state. It does not
trigger for pull requests that are opened directly as ready-for-review.
## Summary

Enables the `enableEagerAlternateStateNodeCleanup` feature flag for all
variants, while maintaining the `__VARIANT__` for the internal React
Native flavor for backtesting reasons.

## How did you test this change?

```
$ yarn test
```
@pull pull bot added the ⤵️ pull label Jun 6, 2025
unstubbable and others added 27 commits June 6, 2025 06:42
This shouldn't be needed now that the lint rule was moved.
Block the view transition on suspensey images Up to 500ms just like the
client.

We can't use `decode()` because of a bug in Chrome where those are blocked
on `startViewTransition` finishing. Instead we rely on sync decoding, and
also on the image being live while it's animating in, assuming it doesn't
start out visible.

However, we can block the View Transition from starting on the `"load"`
or `"error"` events.

The nice thing about blocking inside `startViewTransition` is that we
have already done the layout so we can only wait on images that are
within the viewport at this point. We might want to do that in Fiber
too. If many images don't have a fixed size but need to load first, they
can all end up in the viewport. We might consider only doing this for
images that have a fixed size, or only for a maximum number that don't have
a fixed size.
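
As a rough standalone sketch of the technique described above (not React's implementation): wait on the `load`/`error` events of the images that ended up in the viewport, capped at 500ms, from inside the `startViewTransition` update callback. The class toggle and helper names are illustrative assumptions.

```js
// Sketch only: wait for in-viewport images (load or error), capped at 500ms,
// before letting the View Transition animation start.
function waitForImages(images, timeoutMs = 500) {
  const pending = images
    .filter((img) => !img.complete)
    .map(
      (img) =>
        new Promise((resolve) => {
          // Resolve on either outcome; a failed image shouldn't block the transition.
          img.addEventListener('load', resolve, {once: true});
          img.addEventListener('error', resolve, {once: true});
        })
    );
  const timeout = new Promise((resolve) => setTimeout(resolve, timeoutMs));
  return Promise.race([Promise.all(pending), timeout]);
}

document.startViewTransition(async () => {
  document.body.classList.toggle('detail-view'); // the actual DOM update (illustrative)
  // Layout for the new state has already happened here, so we only need to
  // wait on images that are now within the viewport.
  const visible = [...document.images].filter((img) => {
    const rect = img.getBoundingClientRect();
    return rect.bottom > 0 && rect.top < window.innerHeight;
  });
  await waitForImages(visible);
});
```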
…undaries (#33454)

We want to make sure that we can block the reveal of a well designed
complete shell reliably. In the Suspense model, client transitions don't
have any way to implicitly resolve. This means you need to use Suspense
or SuspenseList to explicitly split the document. Relying on the implicit
behavior would mean you can't add a Suspense boundary later where needed. So
we highly encourage their use around large content.

However, if you have constructed a too large shell (e.g. by not adding
any Suspense boundaries at all) then that might take too long to render
on the client. We shouldn't punish users (or overzealous metrics
tracking tools like search engines) in that scenario.

This opts out of render blocking if the shell ends up too large to be
intentional and too slow to load. Instead it deopts to showing the
content split up in arbitrary ways (browser default). It only does this
for SSR, and not client navs so it's not reliable.

In fact, we issue an error to `onError`. This error is recoverable in
that the document is still produced. It's up to your framework to decide
if this errors the build or just surface it for action later.

What should the limit be, though? There's a trade-off here. If the limit is
too low then you can't fit a reasonably well-built UI within it without
getting errors. If it's too high then things that accidentally fall below it
might take too long to load.

I came up with 512kB of uncompressed shell HTML. See the comment in code
for the rationale for this number. TL;DR: Data and theory indicates that
having this much content inside `rel="expect"` doesn't meaningfully
change metrics. Research of above-the-fold content on various websites
indicates that this can comfortably fit all of them, which should be enough
for any intentional initial paint.
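
A minimal sketch of how a framework might consume the recoverable error through the existing `onError` option (the exact entry point and the error's shape/message are not assumed here; the handling logic is illustrative):

```js
import * as React from 'react';
import {renderToReadableStream} from 'react-dom/server.browser';

function App() {
  return React.createElement('html', null, React.createElement('body', null, 'Hello'));
}

const recoverableErrors = [];
const stream = await renderToReadableStream(React.createElement(App), {
  onError(error) {
    // The oversized-shell error is recoverable: the document is still
    // produced, so we only record it here.
    recoverableErrors.push(error);
  },
});
await stream.allReady;
if (recoverableErrors.length > 0) {
  // Framework's choice: fail the build, or just surface these for later action.
  console.warn('Recoverable SSR errors:', recoverableErrors);
}
```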
)

Follow up to #33442. This is the bundled version.

To keep type check passes from exploding and to keep the maintenance of the
annoying `paths: []` list small, this doesn't add these to the Flow type
checks. We might miss some config, but every combination should already be
covered by other passes.

I also don't add any jest tests because to test these double export entry
points we need conditional importing to cover builds and non-builds, which
turns out to be difficult for the Flight builds, so these aren't covered by
any basic build tests.

This approach is what I'm going for, for the other bundlers too.
Adding throttling or delaying images can obviously impact metrics. However,
it's all in the name of a better actual user experience overall. (Note that
it's not strictly worse even for metrics. Often it's actually strictly
better due to less work being done overall thanks to batching.)

Metrics can impact things like search ranking but I believe this is on a
curve. If you're already pretty good, then a slight delay won't suddenly
make you rank in a completely different category. Similarly, if you're
already pretty bad then a slight delay won't make it suddenly way worse.
It's still in the same realm. It's just one weight of many. I don't
think this will make a meaningful practical impact and if it does,
that's probably a bug in the weights that will get fixed.

However, because there's a race to try to "make everything green" in
terms of web vitals, if you go from green to yellow only because of some
throttling or suspensey images, it can feel bad. Therefore this
implements a heuristic where if the only reason we'd miss a specific
target is because of throttling or suspensey images, then we shorten the
timeout to hit the metric. This is a worse user experience because it
can lead to extra flashing but feeling good about "green" matters too.

If you then have another reveal that happens to be the largest
contentful paint after that, then that's throttled again so that it
doesn't become flashy after that. If you've already missed the deadline
then you're not going to hit your metric target anyway. It can affect
average but not median.

This is mainly about LCP. It doesn't affect FCP since that doesn't have
a throttle. If your LCP is the same as your FCP then it also doesn't
matter.

We assume that `performance.now()`'s zero point starts at the "start of
the navigation" which makes this simple. Even if we used the
`PerformanceNavigationTiming` API it would just tell us the same thing.

This is only implemented for Fizz since these metrics currently tend to be
tracked only for initial loads, but with soft-nav tracking we could consider
implementing the same for Fiber throttles.
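
A simplified sketch of the heuristic (the constants and names are illustrative assumptions, not Fizz's actual code):

```js
// Minimal sketch: shorten the reveal delay only when our own throttling
// would be the sole reason a "green" LCP target is missed.
const GOOD_LCP_TARGET_MS = 2500; // the usual "green" LCP threshold
const THROTTLE_MS = 300;         // hypothetical reveal throttle

function getRevealDelay(now /* performance.now(), zero = start of navigation */) {
  const throttledReveal = now + THROTTLE_MS;
  if (now <= GOOD_LCP_TARGET_MS && throttledReveal > GOOD_LCP_TARGET_MS) {
    // We'd cross the target only because of our own throttling:
    // shorten the delay so the paint still lands inside the target.
    return Math.max(0, GOOD_LCP_TARGET_MS - now);
  }
  // Otherwise keep the full throttle; we're either already past the target
  // (the metric is missed regardless) or comfortably within it.
  return THROTTLE_MS;
}
```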
When deeply nested Suspense boundaries inside a fallback of another boundary
resolve, it is possible to encounter situations where you either attempt to
flush an aborted segment or you have a boundary without any root segment. We
intended for both of these conditions to be impossible to arrive at
legitimately; however, it turns out that in this situation you can. The fix
is two-fold:

1. Allow flushing aborted segments by simply skipping them. This does remove
some protection against future misconfiguration of React, because it is no
longer an invariant that you should never attempt to flush an aborted
segment, but there are legitimate cases where this can come up, and simply
omitting the segment is fine because we know that the user will never
observe it. A semantically better solution would be to avoid flushing
boundaries inside an unneeded fallback, but to do this we would need to
track all boundaries inside a fallback or create back pointers, which adds
memory overhead and possibly makes GC harder to do efficiently. By flushing
extra we're maintaining the status quo and only suffer in performance, not
with broken semantics.

2. When queuing completed segments, allow queueing aborted segments, and if
we are eliding the enqueued segment, allow child segments that are errored
to be enqueued too. This means we can maintain the invariant that a boundary
must have a root segment the first time we flush it; it just might be
aborted (see point 1 above).

This change has two seemingly similar test cases to exercise this fix.
The reason we need both is that when you have empty segments you hit
different code paths within Fizz and so each one (without this fix)
triggers a different error pathway.

This change also includes a fix to our tests where we were not appropriately
setting CSPnonce back to null at the start of each test, so in some contexts
scripts would not run for some tests.
Reverts #33457, #33456 and #33442.

There are too many issues with wrappers, lazy init, stateful modules,
duplicate instantiation of async_hooks and duplication of code.

Instead, we'll just do a wrapper polyfill that uses Node Streams
internally.

I kept the client indirection files that I added for consistency with
the server though.
This effectively lets us consume Web Streams in a Node build. In fact
the Node entry point is now just adding Node stream APIs.

For the client, this is simple because the configs are not actually
stream type specific. The server is a little trickier.
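
For reference, a tiny sketch of the adapter mechanism such a wrapper can lean on, using Node's built-in Web Streams converters (available in modern Node versions); this illustrates the general idea, not the actual build output:

```js
import {Readable, Writable} from 'node:stream';

// General adapters between the two stream worlds: a Node entry point can
// wrap a Web Streams based implementation and expose Node stream APIs on top.
const toNodeReadable = (webReadable) => Readable.fromWeb(webReadable);
const toWebReadable = (nodeReadable) => Readable.toWeb(nodeReadable);
const toWebWritable = (nodeWritable) => Writable.toWeb(nodeWritable);
```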
New take on #33441.

This uses a wrapper instead of a separate bundle.
This needs some tweaks to the implementation and a conversion but simple
enough.

---------

Co-authored-by: Hendrik Liebau <[email protected]>
…#33478)

I noticed that the ThirdPartyComponent in the fixture was showing the
wrong stack and the `"use third-party"` is in the wrong ___location.

<img width="628" alt="Screenshot 2025-06-06 at 11 22 11 PM"
src="https://github.com/user-attachments/assets/f0013380-d79e-4765-b371-87fd61b3056b"
/>

When creating the initial JSX inside the third party server, we should
make sure that it has no owner. In a real cross-server environment you
get this by default just by executing in a different context. But since the
fixture example is inside the same AsyncLocalStorage as the parent, it
already has an owner which gets transferred. So we should make sure that
where we create the JSX it has no owner, to simulate this.

When we then parse a null owner on the receiving side, we replace its
owner/stack with the owner/stack of the call to `createFrom...` to
connect them. This worked fine with only two environments. The bug was
that when we did this and then transferred the result to a third
environment we took the original parsed stack trace. We should instead
parse a new one from the replaced stack in the current environment.

The second bug was that the `"use third-party"` badge ends up in the
wrong place when we do this kind of thing. Because the stack of the
thing entering the new environment is the call to `createFrom...` which
is in the old environment even though the component itself executes in
the new environment. So to see if there's a change we should be
comparing the current environment of the task to the owner's environment
instead of the next environment after the task.

After:

<img width="494" alt="Screenshot 2025-06-07 at 1 13 28 AM"
src="https://github.com/user-attachments/assets/e2e870ba-f125-4526-a853-bd29f164cf09"
/>
Technically the async call graph spans basically all the way back to the
start of the app potentially, but we don't want to include everything.
Similarly we don't want to include everything from previous components
in every child component. So we need some heuristics for filtering out
data.

We roughly want to be able to inspect what might contribute to a Suspense
loading sequence even if it didn't contribute this time, e.g. due to a race
condition.

One flaw with the previous approach was that awaiting a cached promise
in a sibling that happened to finish after another sibling would be
excluded. However, in a different race condition that might end up being
used so I wanted to include an empty "await" in that scenario to have
some association from that component.

However, for data that resolved fully before the request even started,
it's a little different. This can be things that are part of the start
up sequence of the app or externally cached data. We decided that this
should be excluded because it doesn't contribute to the loading sequence
in the expected scenario. I.e. if it's cached. Things that end up being
cache misses would still be included. If you want to test externally
cached data misses, then it's up to you or the framework to simulate
those. E.g. by dropping the cache. This also helps free up some noise
since static / cached data can be excluded in visualizations.

I also apply this principle to forwarding debug info. If you reuse a
cached RSC payload, then the Server Component render time and its awaits
get clamped to the caller as if it had zero render/await time. The I/O
entry is still back dated but if it was fully resolved before we started
then it's completely excluded.
…allow processing function props (#32119)

## Summary

In react-native, props that are passed as functions get converted to a
boolean (`true`). This is the default pattern for event handlers in
react-native.
However, there are reasons why you might want to opt out of this behavior
and instead pass along the actual function as the prop.
Right now there is no way to do this, and props that are functions always
get set to `true`.
The `ViewConfig` attributes already have an API for a `process` function. I
simply moved the check for the process function up, so if a ViewConfig's
prop attribute configures a process function, it is always called first.
This provides an API to opt out of the default behavior.

This is the accompanied PR for react-native:

- facebook/react-native#48777

## How did you test this change?


I modified the code manually in a template react-native app and confirmed
it's working. This is a code path you only need in very special cases, thus
it's a bit hard to provide a test for it. I recorded a video where you can
see that the changes are active and the prop is being passed as a native
value.

For this I created a custom native component with a view config that
looked like this:

```js
const viewConfig = {
  uiViewClassName: 'CustomView',
  bubblingEventTypes: {},
  directEventTypes: {},
  validAttributes: {
    nativeProp: {
      process: (nativeProp) => {
        // Identity function that simply returns the prop function callback
        // to opt out of this prop being set to `true` since it's a function
        return nativeProp;
      },
    },
  },
}
```



https://github.com/user-attachments/assets/493534b2-a508-4142-a760-0b1b24419e19

Additionally, I made sure that this doesn't conflict with any existing view
configs in react-native. In general, this shouldn't be a breaking change:
for existing view configs it didn't make a difference whether you set
`myProp: true` or `myProp: { process: () => {...} }`, because as soon as the
prop was detected to be a function the config wouldn't be used (which is
what this PR fixes).
Probably everyone, including the react-native core components, uses
`myProp: true` for callback props, so this change should be fine.
The prettier check for this file is currently failing on `main`, after
#32119 was merged.
Summary: useEffectEvent values are not meant to be added to the dep
array
This adds some I/O to go get the third party thing to test how it
overlaps.

With #33482, this is what it looks like. The await gets cut off when the
third party component starts rendering. I.e. after the latency to start.

<img width="735" alt="Screenshot 2025-06-08 at 5 42 46 PM"
src="https://github.com/user-attachments/assets/f68d9a84-05a1-4125-b3f0-8f3e4eaaa5c1"
/>

This doesn't fully simulate everything because it should actually also
simulate each chunk of the stream coming back too. We could wrap the
ReadableStream to simulate that. In that scenario, it would probably get
some awaits on the chunks at the end too.
…paces (#33409)

## Summary

Problem #1: Running the `link-compiler.sh` bash script via `"prebuild"`
script fails if a developer has cloned the `react` repo into a folder
that contains _any_ spaces. 3 tests fail because of this.

<img width="1003" alt="fail-1"
src="https://github.com/user-attachments/assets/1fbfa9ce-4f84-48d7-b49c-b6e967b8c7ca"
/>
<img width="1011" alt="fail-2"
src="https://github.com/user-attachments/assets/0a8c6371-a2df-4276-af98-38f4784cf0da"
/>
<img width="1027" alt="fail-3"
src="https://github.com/user-attachments/assets/1c4f4429-800c-4b44-b3da-a59ac85a16b9"
/>

For example, my current folder is:
`/Users/wes/Development/Open Source Contributions/react`

The link compiler error returns:
`./scripts/react-compiler/link-compiler.sh: line 15: cd:
/Users/wes/Development/Open: No such file or directory`

Problem #2: 1 test in `ReactChildren-test.js` fails due to the existing
stack trace regex, which should be lightly revised.

`([^(\[\n]+)[^\n]*/g` is more robust for stack traces: it captures the
function/class name (with dots) and does not break on spaces in file
paths.
`([\S]+)[^\n]*/g` is simpler but breaks if there are spaces and doesn't
handle dotted names well.

Additionally, we trim the whitespace off the name to resolve extra
spaces breaking this test as well:

```
-     in div (at **)
+     in div  (at **)
```

<img width="987" alt="fail-4"
src="https://github.com/user-attachments/assets/56a673bc-513f-4458-95b2-224129c77144"
/>

All of the above tests pass if I hyphenate my local folder:
`/Users/wes/Development/Open-Source-Contributions/react`

I selfishly want to keep spaces in my folder names. 🫣

## How did you test this change?

**npx yarn prebuild**

Before:
<img width="896" alt="Screenshot at Jun 01 11-42-56"
src="https://github.com/user-attachments/assets/4692775c-1e5c-4851-9bd7-e12ed5455e47"
/>

After:
<img width="420" alt="Screenshot at Jun 01 11-43-42"
src="https://github.com/user-attachments/assets/4e303c00-02b7-4540-ba19-927b2d7034fb"
/>

**npx yarn test**
**npx yarn test
./packages/react/src/\_\_tests\_\_/ReactChildren-test.js**
**npx yarn test -r=xplat --env=development --variant=true --ci
--shard=3/5**

Before:
<img width="438" alt="before"
src="https://github.com/user-attachments/assets/f5eedb22-18c3-4124-a04b-daa95c0f7652"
/>

After:
<img width="439" alt="after"
src="https://github.com/user-attachments/assets/a94218ba-7c6a-4f08-85d3-57540e9d0029"
/>

<img width="650" alt="Screenshot at Jun 02 18-03-39"
src="https://github.com/user-attachments/assets/3eae993c-a56b-46c8-ae02-d249cb053fe7"
/>

<img width="685" alt="Screenshot at Jun 03 12-53-47"
src="https://github.com/user-attachments/assets/5b2caa33-d3dc-4804-981d-52cb10b6226f"
/>
… in shadow dom scenario (#33487)

## Summary

Minor changes around css and styling of Settings dialog.

1. The `:root` selector was updated to `:is(:root, :host)` to make CSS
variables available on the Shadow Root.
2. CSS tweaks around the Settings dialog: removed references to deleted
styles, removed unused styles, and ironed out styling for cases where input
styles are enhanced by the user agent stylesheet.


## How did you test this change?

| Before | After |
|--------|--------|
| ![Screenshot 2025-06-09 at 15 35
55](https://github.com/user-attachments/assets/1ac5d002-744b-4b10-9501-d4f2a7c827d2)
| ![Screenshot 2025-06-09 at 15 26
12](https://github.com/user-attachments/assets/8cc07cda-99a5-4930-973b-b139b193e349)
|
| ![Screenshot 2025-06-09 at 15 36
02](https://github.com/user-attachments/assets/1af4257c-928d-4ec6-a614-801cc1936f4b)
| ![Screenshot 2025-06-09 at 15 26
25](https://github.com/user-attachments/assets/7a3a0f7c-5f3d-4567-a782-dd37368a15ae)
|
| ![Screenshot 2025-06-09 at 15 36
05](https://github.com/user-attachments/assets/a1e00381-2901-4e22-b1c6-4a3f66ba78c9)
| ![Screenshot 2025-06-09 at 15 26
30](https://github.com/user-attachments/assets/bdefce68-cbb5-4b88-b44c-a74f28533f7d)
|
| ![Screenshot 2025-06-09 at 15 36
12](https://github.com/user-attachments/assets/4eda6234-0ef0-40ca-ad9d-5990a2b1e8b4)
| ![Screenshot 2025-06-09 at 15 26
37](https://github.com/user-attachments/assets/5cac305e-fd29-460c-b0b8-30e477b8c26e)
|
Includes #31412.

The issue is that `pushTreeFork` stores some global state when reconciling
children. This gets popped by `popTreeContext` in `completeWork`.
Normally `completeWork` returns its own `Fiber` again if it wants to do
a second pass which will call `pushTreeFork` again in the next pass.
However, `SuspenseList` doesn't return itself, it returns the next child
to work on.

The fix is to keep track of the count and push it again when we return the
next child to attempt.

There are still some outstanding issues with hydration. For example, the
backwards test still has the wrong behavior because it hydrates backwards
and so picks up the DOM nodes in reverse order. `tail="hidden"` also doesn't
work correctly.

There's also another issue with `useId` and `AsyncIterable` in
SuspenseList when there's an unknown number of children. We don't
support those showing one at a time yet though so it's not an issue yet.
To fix it we need to add variable total count to the `useId` algorithm.
E.g. by falling back to varint encoding.

---------

Co-authored-by: Rick Hanlon <[email protected]>
Co-authored-by: Ricky <[email protected]>
#33482)

Previously you weren't guaranteed to have only advancing time entries; you
could jump back in time. Now it omits unnecessary duplicates and clamps
automatically if you emit an earlier time entry, enforcing forward order
only.

The reason I didn't do this originally is that `await` can jump around in
the order, because we're trying to encode a graph into a flat timeline for
simplicity of the protocol and consumers.

```js
async function a() {
  await fetch1();
  await fetch2();
}

async function b() {
  await fetch3();
}

async function foo() {
  const p = a();
  await b();
  return p;
}
```

This can effectively create two parallel sequences:

```
--1.................----2.......--
------3......---------------------
```

This can now be flattened to either:

```
--1.................3---2.......--
```

Or:

```
------3......1......----2.......--
```

Depending on which one we visit first. Regardless, information is lost.

I'd say that the second one is a worse encoding of this scenario because it
pretends that we weren't waiting for part of the timespan that we were. To
solve this I think we should probably make `emitAsyncSequence` create a
temporary flat list and then sort it by start time before emitting.

Although we weren't actually blocked since there was some CPU time that
was able to proceed to get to 3. So maybe the second one is actually
better. If we wanted that consistently we'd have to figure out what the
intersection was.
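
A sketch of that suggested follow-up, assuming a flat list of entries with `startTime` fields and an `emit` callback (both illustrative, not the actual protocol types):

```js
// Buffer the sequence into a flat array and sort by start time before
// emitting, so parallel branches flatten in chronological order.
function emitAsyncSequenceSorted(entries, emit) {
  const sorted = entries.slice().sort((a, b) => a.startTime - b.startTime);
  for (const entry of sorted) {
    emit(entry);
  }
}
```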

---------

Co-authored-by: Hendrik Liebau <[email protected]>
sebmarkbage and others added 29 commits July 7, 2025 11:42
Same as #33716 but without the separate close signal.

We'll need the ref count for the separate debug channel anyway, but I'm not
sure we'll need the separate close signal.
…33719)

Stacked on #33718. Alternative to #33716.

The issue with flushing the Server Components track in its current form
is that we need to decide how long to wait before flushing whatever we
have. That's because the root's end time will be determined by the end
time of that last child.

However, if a child isn't actually used then we don't necessarily need
to include it in the Server Components track since it wasn't blocking
the initial render.

This waits for 100ms after the last pending chunk is resolved, and if
nothing is invoking any more lazy initializers after that, then we log the
Server Components track with the information we have at that point. We also
don't eagerly initialize any chunks that weren't already initialized, so if
nothing was rendered, then nothing will be logged.

This is somewhat an artifact of the current visualization. If we did
another, transposed form we wouldn't necessarily need to wait until the end
and could log things as they're discovered.
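
A sketch of the quiet-period idea, with illustrative names (`flushServerComponentsTrack` is a hypothetical callback, not React's internal API):

```js
// Every time a chunk resolves or a lazy initializer runs, restart a 100ms
// timer; only if nothing further happens within that window do we flush.
function createTrackFlusher(flushServerComponentsTrack, delayMs = 100) {
  let timer = null;
  return function onChunkResolvedOrInitialized() {
    if (timer !== null) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      flushServerComponentsTrack(); // logs whatever debug info exists so far
    }, delayMs);
  };
}
```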
Content in Suspense fallbacks is really not considered part of the Suspense,
but since it does have some behavior it should be marked somehow, separately
from the Suspense content.

A follow up would be to do the same in Fiber.
This is rolled out to 100%.

Let me merge it though.
Good to assert these include the component stack
We need to optimize the collection of debug info for dev mode. This is
an incredibly hot path since it instruments all I/O and Promises in the
app.

These optimizations focus primarily on the collection of stack traces.
They are expensive to collect because we need to eagerly collect the
stacks since they can otherwise cause memory leaks. We also need to do
some of the processing of them up front. We also end up only using a few
of them in the end but we don't know which ones we'll use.

The first compromise here is that I now only collect the stacks of
"awaits" if they were in a specific request's render. In some cases it's
useful to collect them even outside of this if they're part of a
sequence that started early. I still collect stacks for the created
Promises outside of this though which can still provide some context.

The other optimization to awaits is that since we'll only use the innermost
one that had an await directly in userspace, we can stop collecting stacks
on a chain of awaits after we find one. This requires a quick filter on a
single callsite to determine. Since we now only collect stacks from awaits
that belong to a specific Request, we can use that request's specific filter
option. Technically this might not be quite correct if the same thing ends
up deduped across Requests, but that's an edge case.

Additionally, I now stop collecting stacks for I/O nodes. They're almost
always superseded by the Promise that wraps them anyway. Even if you write
mostly Promise-free code, you'll likely end up with a Promise at the root of
the component eventually and then use its stack. You have to really contort
the code to end up with zero Promises, at which point it's not very useful
anyway. At best it's mostly useful for giving a name to the I/O when the
rest is just stuff like `new Promise`.

However, a possible alternative optimization could be to *only* collect the
stack of spawned I/O and not the stack of Promises. The issue with Promises
(not awaits) is that when they're created we never know what will end up
resolving them, so we always have to eagerly collect stacks. This could be
an issue when you have a lot of abstractions that end up not actually being
related to I/O at all. The issue with collecting stacks only for I/O is that
the actual I/O can be pooled or batched, so you end up not having the stack
from where each conceptual operation within the batch started. Which is why
I decided to keep the Promise stack.
When a debug channel is available, we now allow objects to be lazily
requested through the debug channel, and only then will the server send
them.

The client will actually eagerly ask for the next level of objects once
it parses its payload. That way those objects have likely loaded by the
time you actually expand that deep e.g. in the console repl. This is
needed since the console repl is synchronous when you ask it to invoke
getters.

Each level is lazily parsed which means that we don't parse the next
level even though we eagerly loaded it. We parse it once the getter is
invoked (in Chrome DevTools you have to click a little `(...)` to invoke
the getter). When the getter is invoked, the chunk is initialized and
parsed. This then causes the next level to be asked for through the
debug channel. Ensuring that if you expand one more level you can do so
synchronously.

Currently debug chunks are eagerly parsed, which means that if you have
things like server component props that are lazy, they can end up being
immediately asked for, but I'm trying to move toward making the debug chunks
lazy.
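
As a rough illustration of the lazy-object idea (not the Flight wire protocol): a property exposed as a getter that parses on demand and prefetches the next level over a debug channel. `chunk.parse()` and `debugChannel.requestNextLevel()` are hypothetical.

```js
function defineLazyProperty(target, key, chunk, debugChannel) {
  Object.defineProperty(target, key, {
    configurable: true,
    enumerable: true,
    get() {
      const value = chunk.parse(); // parse only when the getter is invoked
      debugChannel.requestNextLevel(value); // prefetch one level deeper
      // Replace the getter with a plain value so later reads are synchronous.
      Object.defineProperty(target, key, {value, enumerable: true});
      return value;
    },
  });
}
```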
We unnecessarily render the preamble in a task. This updates the
implementation to perform this render inline.

Testing this is tricky because one of the only ways you could assert
this was even happening is based on how things error if you abort while
rendering the root.

While adding a test for this I discovered that not all abortable tasks
report errors when aborted during a normal render. I've asserted the current
behavior; I will address the other issue at another time and update the
assertion later as necessary.
Because the object limit is unfortunately depth-first due to limitations of
JSON stringify, we need to ensure that things we really don't want outlined
come first in the enumeration order.

We add the stack length to the object limit to ensure that the stack
frames aren't outlined. In console all the user space arguments are at
the end of the args. In server component props, the props are at the end
of the properties of the element.

For the `value` of I/O we had it before the stack so it could steal the
limit from the stack. The fix is to put it at the end.
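
A toy example of why enumeration order matters with a depth-first budget in a `JSON.stringify` replacer (the budget mechanism here is an assumption for illustration, not React's implementation):

```js
function makeReplacer(limit) {
  let budget = limit;
  return (key, val) => {
    if (key === '') return val; // root object
    if (typeof val === 'object' && val !== null) {
      if (budget-- <= 0) return '$ref'; // stand-in for a deferred/outlined object
    }
    return val;
  };
}

const stack = [['foo', 'file.js', 10], ['bar', 'file.js', 20]]; // two frames
const value = {nested: {big: 'payload'}};

// Stack first: both frames fit within the budget and `value` gets outlined.
console.log(JSON.stringify({stack, value}, makeReplacer(3)));
// Value first: its nested objects eat the budget depth-first and the
// individual stack frames get outlined instead.
console.log(JSON.stringify({value, stack}, makeReplacer(3)));
```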
This is a compromise because there can be a lot of Promise instances
created. They're useful because they generally provide a better stack
when batching/pooled connections are used.

This restores stack collection for I/O nodes so we have something to
fallback on if there's no owner.

That way we can at least get a name or something out of I/O that was
spawned outside a render but mostly avoids collecting starting I/O
outside of render.
This is an optimized version of @asmjmp0's fix in
#31940. When we merge consecutive
blocks we need to take care to rewrite later phis whose operands will
now be different blocks due to merging. Rather than iterate all the
blocks on each merge as in #31940, we can do a single iteration over all
the phis at the end to fix them up.

Note: this is a redo of #31959

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33725).
* #33726
* __->__ #33725
We currently inline IIFEs by creating a temporary and a labeled block with
the original code. The original return statements turn into an assignment to
the temporary and a break out of the label. However, many cases of IIFEs are
due to inlining of manual `useMemo()`, and these cases often have only a
single return statement. Here the output is cleaner if we avoid the
temporary and label, so that's what we do in this PR.

Note that the most complex part of the change is actually around
ValidatePreserveExistingMemo: we have some logic to track the IIFE temporary
reassignments which needs to be updated to handle the simpler version of
inlining.
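
A rough before/after of the output shapes described above (illustrative, not the compiler's exact codegen):

```js
const a = 1, b = 2;
const computeExpensive = (x, y) => x + y;

// Input: an IIFE resulting from inlining a manual useMemo with one return.
const fromIife = (() => {
  return computeExpensive(a, b);
})();

// General inlining: a temporary plus a labeled block; returns become breaks.
let t0;
bb0: {
  t0 = computeExpensive(a, b);
  break bb0;
}
const general = t0;

// Single-return case after this PR: no temporary, no label.
const simplified = computeExpensive(a, b);
```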

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33726).
* __->__ #33726
* #33725
If we're about to defer an object, then we shouldn't store a reference to
it, because then we can end up deduping by referring to the deferred string.
If we're in a different context, we should still be able to emit the object.
…string stack (#33735)

When we know that the object that we pass in is immediately parsed, then we
know it couldn't have been reified into an unstructured stack yet. In this
path we assume that we'll trigger `Error.prepareStackTrace`.

Since we know that nobody else will read the stack after us, we can skip
generating a string stack and just return empty. We can also skip
caching.
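
A sketch of the V8 hook this relies on (illustrative, not React's code): with `Error.prepareStackTrace` installed, reading `.stack` hands us the structured `CallSite` objects, and returning an empty string means no formatted string stack is built or cached.

```js
const frames = [];
const previousPrepare = Error.prepareStackTrace;
Error.prepareStackTrace = (error, callSites) => {
  for (const site of callSites) {
    frames.push(site.getFunctionName()); // structured access, no string parsing
  }
  return ''; // nobody reads the string form after us, so skip building it
};
try {
  void new Error('collect').stack; // reading .stack triggers prepareStackTrace lazily
} finally {
  Error.prepareStackTrace = previousPrepare;
}
```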
We don't really need to retain a reference to whatever Promise another
Promise was created in. Only awaits need to retain both their trigger
and their previous context.
…created from user space (#33739)

We use the stack of a Promise as the start of the I/O instead of the
actual I/O since that can symbolize the start of the operation even if
the actual I/O is batched, deduped or pooled. It can also group multiple
I/O operations into one.

We want the deepest possible Promise since otherwise it would just be
the Component's Promise.

However, we don't really need to go deeper than the boundary between first
party and third party. We can't just take the outermost Promise that has
third-party things on the stack, though, because third-party code can have
callbacks into first party, and then we want the inner one. So we take the
innermost Promise that depends on I/O and has a first-party stack on it.

The realization is that for the purposes of determining whether we have
a first party stack we need to ignore async stack frames. They can
appear on the stack when we resume third party code inside a resumption
frame of a first party stack.

<img width="832" alt="Screenshot 2025-07-08 at 6 34 25 PM"
src="https://github.com/user-attachments/assets/1636f980-be4c-4340-ad49-8d2b31953436"
/>

---------

Co-authored-by: Sebastian Sebbie Silbermann <[email protected]>
…33742)

If we have the ability to lazy load Promise values, i.e. if we have a
debug channel, then we should always use it for Promises that aren't
already resolved and instrumented.

There's little downside to this since they're async anyway.

This also lets us avoid adding `.then()` listeners too early, e.g. if adding
the listener would have side effects. This avoids covering up "unhandled
rejection" errors: if we listened to a promise eagerly, including reject
listeners, we'd have marked that Promise's rejection as handled, whereas
maybe it wouldn't have been otherwise.

In this mode we can also indefinitely wait for the Promise to resolve
instead of just waiting a microtask for it to resolve.
Follow up to #33736.

If we need to save on CPU/memory pressure, we can instead just pray and
hope that a Promise doesn't get garbage collected before we need to read
it.

This can cause fragile access to the Promise value in devtools
especially if it's a slow and pressured render.

Basically, you'd have to hope that GC doesn't run after the inner await
finishes its microtask callback and before the resolution of the
component being rendered is invoked.
…listeners (#33743)

This is the same as we do for currently rendering tasks. They get
effectively sync aborted when the listener is invoked.

We potentially miss out on some debug info in that case, but that would only
apply to entries inside the stream, which don't really have their own debug
info anyway.
In the playground it's helpful to show all errors, even those that don't
completely abort compilation. For example, to help demonstrate that the
compiler catches things like setState in effects. This detects these
errors and ensures we show them.
)

* Error for `eval()`
* More specific error message for `with (expr) { ... }` syntax
* More specific error message for class declarations

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33746).
* #33752
* #33751
* #33750
* #33748
* #33747
* __->__ #33746
Supports inline enum declarations in both Flow and TS by treating the node
as pass-through (enums can't capture values mutably). Relatedly, this PR
extends the set of type-related declarations that we ignore. Previously we
threw a todo for things like DeclareClass or DeclareVariable, but these are
type-related and can simply be dropped, just like we dropped TypeAlias.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33747).
* #33753
* #33752
* #33751
* #33750
* #33748
* __->__ #33747
…atements (#33748)

import, export, and TS namespace statements can only be used at the
top-level of a module, which is enforced by parsers already. Here we add
a backup validation of that. As of this PR, we now have only major
statement type (class declarations) listed as a todo.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33748).
* #33753
* #33752
* #33751
* #33750
* __->__ #33748
We use this variant for syntax we intentionally don't support: with
statements, eval, and inline class declarations.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33750).
* #33753
* #33752
* #33751
* __->__ #33750
* #33748
When postponing the root we encode the segment id into the postponed state,
but we should really be resetting it to zero so we can restart the counter
from the beginning when the resume is actually just a re-render.

This also no longer assigns the root segment id based on the postponed
state when resuming the root for the same reason. In the future we may
use the embedded replay segment id if we implement resuming the root
without re-rendering everything but that is not yet implemented or
planned.
We typically treat an empty message as closing the debug channel stream, but
for the Noop renderer we don't use an intermediate stream; we just pass the
message through.


https://github.com/facebook/react/blob/bbc13fa17be8eebef3e6ee47f48c76c0c44e2f36/packages/react-server-dom-webpack/src/client/ReactFlightDOMClientBrowser.js#L59-L60

For that simple case we should just treat it as a close without an
intermediate stream.
This lets us pass a writable on the server side and a readable on the client
side to send debug info through a separate channel so that it doesn't
interfere with the main payload as much. The main payload refers to chunks
defined in the debug info, which means it's still blocked on it, though.
This ensures that the debug data has loaded by the time the value is
rendered so that the next step can forward the data.

This will be a bit fragile to race conditions until #33665 lands.
Another follow up needed is the ability to skip the debug channel on the
receiving side. Right now it'll block forever if you don't provide one
since we're blocking on the debug data.
…osures (#33544)

Summary:

useEffectEvent is meant to be used specifically in combination with
useEffect, and using
the feature in arbitrary closures can lead to surprising reactivity
semantics. In order to
minimize risk in the experimental rollout, we are going to restrict its
usage to being
called directly inside an effect or another useEffectEvent, effectively
enforcing the function
coloring statically. Without an effect system this is the best we can
do.
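
A small illustration of the restriction, assuming the experimental `experimental_useEffectEvent` export; `logVisit` is a hypothetical helper:

```js
import {useEffect, experimental_useEffectEvent as useEffectEvent} from 'react';

function Page({url, numberOfItems}) {
  const onVisit = useEffectEvent((visitedUrl) => {
    logVisit(visitedUrl, numberOfItems); // always reads the latest numberOfItems
  });

  useEffect(() => {
    onVisit(url); // OK: called directly inside the effect
  }, [url]); // the event function itself is not a reactive dependency

  // Restricted under this rollout: calling it from an arbitrary closure,
  // e.g. an event handler or a setTimeout callback outside an effect.
  // const handleClick = () => onVisit(url);

  return null;
}
```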
@pull pull bot merged commit 97cdd5d into code:main Jul 15, 2025