
LV2: Memory state post-exitspawn fixes #12001

Merged (1 commit, May 13, 2022)

Conversation

@elad335 (Contributor) commented on May 12, 2022:

  • Fix memory capacity when the SDK version of the next executable differs from the original process's.
  • Keep user memory containers; they are not freed at exitspawn!

Hw test: elad335/myps3tests@4bf6002
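For orientation, a minimal sketch of the intent behind the change, using simplified stand-in types rather than the real lv2_memory_container / IDM structures (the accounting below is illustrative, not a claim about the exact RPCS3 arithmetic): the new executable's SDK version suggests a capacity, while the user containers kept alive across exitspawn still reserve their share of it.

```cpp
// Minimal sketch, not the real RPCS3 code: simplified stand-ins for
// lv2_memory_container and the IDM container map.
#include <cstdint>
#include <map>
#include <memory>

struct memory_container { std::uint32_t size = 0; };
using container_map = std::map<std::uint32_t, std::shared_ptr<memory_container>>;

struct saved_mem_state
{
    std::uint32_t old_size{};    // main container capacity before exitspawn
    container_map containers;    // user containers that survive exitspawn
};

// Models the idea behind the init_mem_containers callback: the new
// executable's SDK version suggests a capacity, and the memory still
// reserved by surviving user containers is carved out of it.
std::uint32_t init_mem_for_next_process(const saved_mem_state& saved, std::uint32_t sdk_suggested_mem)
{
    std::uint32_t capacity = sdk_suggested_mem;

    for (const auto& [id, ct] : saved.containers)
    {
        capacity -= ct->size; // containers were not freed at exitspawn
    }

    return capacity;
}
```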

@@ -415,6 +415,27 @@ void _sys_process_exit2(ppu_thread& ppu, s32 status, vm::ptr<sys_exit2_param> ar
, hdd1 = std::move(hdd1), klic = g_fxo->get<loaded_npdrm_keys>().last_key(), old_config = Emu.GetUsedConfig()]() mutable
{
sys_process.success("Process finished -> %s", argv[0]);

Emu.init_mem_containers = [old_size = g_fxo->get<lv2_memory_container>().size, vec = g_fxo->get<id_manager::id_map<lv2_memory_container>>().vec](u32 sdk_suggested_mem) mutable
Reviewer (Member): Missing std::move?

elad335 (Author): This is before Kill; the emulation threads still need the IDM state to remain usable.
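A small illustration of that point, with hypothetical types rather than RPCS3's actual IDM: copying the map into the capture leaves the live state intact for threads that keep running until Kill, whereas moving it would empty the map out from under them.

```cpp
// Illustrative only: copy vs. move when capturing shared state that is
// still in use by other threads.
#include <map>
#include <memory>

struct memory_container { unsigned size = 0; };

// Hypothetical stand-in for the IDM map that emulation threads still read.
std::map<unsigned, std::shared_ptr<memory_container>> live_idm_map;

auto make_restore_callback()
{
    // Copy: live_idm_map stays valid until Emu.Kill() tears everything down.
    // [vec = std::move(live_idm_map)] would clear it while other threads may
    // still be looking objects up through it.
    return [vec = live_idm_map]() mutable
    {
        // ...later, after Kill(), reinstall the preserved containers from vec...
    };
}
```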

elad335 (Author): I forgot a reader lock though.

elad335 (Author): Fixed.
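A sketch of the fixed shape, assuming the shared map is guarded by a reader-writer mutex (the mutex name and layout here are hypothetical): the snapshot is copied while a shared (reader) lock is held, so a concurrent writer cannot mutate the map mid-copy.

```cpp
// Illustrative only: taking the snapshot under a reader lock.
#include <map>
#include <memory>
#include <shared_mutex>

struct memory_container { unsigned size = 0; };

std::shared_mutex idm_mutex; // hypothetical guard for the shared map
std::map<unsigned, std::shared_ptr<memory_container>> live_idm_map;

auto snapshot_containers()
{
    std::shared_lock lock(idm_mutex); // readers may run concurrently; writers are excluded
    return live_idm_map;              // copy made while the lock is held
}
```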

@Nekotekina (Member): I wonder what the point of inheriting containers is. Maybe the spawned process actually inherits the shared memory data (not mapped).

@elad335 requested a review from Nekotekina on May 13, 2022 06:20
@Nekotekina merged commit 524da5d into RPCS3:master on May 13, 2022
@Linear524: I don't know if this PR caused it, but GoW3 now crashes after about 2-3 minutes of gameplay with this error:
·W 0:02:21.842035 {SPU[0x0000200] Thread (LittleCellSpursKernel0) [0x034fc]} RSX: Cache miss at address 0x30A70000. This is gonna hurt...
·W 0:02:21.844852 {SPU[0x0000200] Thread (LittleCellSpursKernel0) [0x03530]} RSX: Cache miss at address 0x30A70000. This is gonna hurt...
·W 0:02:21.872732 {SPU[0x0000200] Thread (LittleCellSpursKernel0) [0x034fc]} RSX: Cache miss at address 0x30A70000. This is gonna hurt...
·W 0:02:21.902638 {SPU[0x0000200] Thread (LittleCellSpursKernel0) [0x023ec]} RSX: Cache miss at address 0x30A70000. This is gonna hurt...
·W 0:02:21.932620 {SPU[0x0000200] Thread (LittleCellSpursKernel0) [0x034fc]} RSX: Cache miss at address 0x30A70000. This is gonna hurt...
·W 0:02:21.961316 {SPU[0x0000200] Thread (LittleCellSpursKernel0) [0x034fc]} RSX: Cache miss at address 0x30A70000. This is gonna hurt...
·W 0:02:21.992612 {SPU[0x0000200] Thread (LittleCellSpursKernel0) [0x034fc]} RSX: Cache miss at address 0x30A70000. This is gonna hurt...
·W 0:02:22.024029 {PPU[0x1000000] Thread (main_thread) [0x002308e0]} RSX: Cache miss at address 0x30A70000. This is gonna hurt...
·W 0:02:22.053881 {SPU[0x0000200] Thread (LittleCellSpursKernel0) [0x034fc]} RSX: Cache miss at address 0x30A70000. This is gonna hurt...
·E 0:02:22.398231 {RSX [0x03d4e40]} RSX: FIFO error: possible desync event (last cmd = 0x3f7380dd)
