
32016 Co Pro: Overruns / Lates seen in (GPU branch only) #6

Closed
hoglet67 opened this issue Oct 18, 2016 · 3 comments
hoglet67 commented Oct 18, 2016

This occurs during "cc cworld", which then hangs.

tube reset - copro 13
OVERRUN: A=5; D=61; RNW=1; NTUBE=1; nRST=1
LATE: A=5; D=61; RNW=1; NTUBE=1; nRST=1
OVERRUN: A=7; D=C4; RNW=0; NTUBE=0; nRST=1
cycle counter = 40471883776
I_CACHE_MISS = 928427899
D_CACHE_MISS = 1999809
tube reset - copro 13

Does not occur on current master branch.

This suggests the emulation might be significantly slower on the GPU branch. That's not the case: Bas32/CLOCKSP reports 19.80MHz on master and 24.95MHz on the GPU branch.

@hoglet67 hoglet67 changed the title Overruns / Lates seen in (GPU branch only) 32016 Co Pro: Overruns / Lates seen in (GPU branch only) Oct 18, 2016
hoglet67 (Owner, Author) commented:

This issue is rather perplexing, as the master branch is very stable (even though it shouldn't be, because of the worse read latency).

An OVERRUN means the GPU ISR handled another bus request before the ATTN bit of the previous message had been seen to be cleared.

The A=5 in the OVERRUN with RNW=1 means this is a read of the Tube data FIFO R3 (used for bulk data transfers), so this is definitely disrupting a data transfer.

The mailbox is currently in memory marked as shared, inner (L1) uncached, outer (L2) cached.

We should be able to blip a spare GPIO when this arises, and capture some logic analyzer traces.

Another experiment would be to move tube_mailbox back to L1 cached memory, and explicitly clean/invalidate, and see if this makes it worse or better.

hoglet67 (Owner, Author) commented:

Frustratingly, I do not seem able to reproduce this issue now.

It previously happened on three successive CC runs.

hoglet67 (Owner, Author) commented:

Closing, as I cannot reproduce this on the latest code, even at 900/350 clock rates.
