
[directfd] Add track cache debug code to directfd driver #2081

Merged · 4 commits · Oct 17, 2024
Conversation

@ghaerr (Owner) commented Oct 17, 2024

As discussed in Mellvik/TLVC#88 (comment), this tooling is meant to show track cache performance when copying files between floppy drives.

Fixes the num_sector calculation in the DF driver, which previously resulted in an incorrect emulated delay when IODELAY is enabled.
Adds the drive number to the TR/RD/WR debug display.
Displays the request queue size when DEBUG_CACHE is set and the queue length is > 1.
Adds a debug=N option to /bootopts, allowing multilevel debug display.
Adds debug_cache2 output when debug=2 for the CH/BM/L1/L2 track cache display.
Removes the auto-probe message in the DF driver when not actually probing.

During cache testing between drives, it was found that full track cache switching between drives does not cause any noticeable performance delay, contrary to what I had thought. What happens during a file copy is that the system reads tracks, quickly gaining access to contiguous file sectors, while buffer writes go into the kernel L2 system buffers with no I/O scheduled at all, so there is no track cache switching to speak of. After the system buffers become full, sync_buffers() is called, which schedules the write I/O. When the async DF driver is in use, this initially queues 9 request headers very quickly, but they are apparently dequeued just as quickly, and multiple request headers aren't used again. I'm not sure why this is occurring yet.

Ultimately, both the BIOS and DF drivers are showing fairly good performance on both boot times and multi-drive copies. A previous num_sector bug caused the IODELAY emulated delay to be incorrect for the DF driver; we're now seeing boot times of 4.5 secs for BIOS and 5.0 secs for DF, very close. Both are half of what they used to be, so progress is looking good.

@ghaerr ghaerr merged commit 2f4d53d into master Oct 17, 2024
2 checks passed
@ghaerr ghaerr deleted the df branch October 17, 2024 00:33
@toncho11 (Contributor) commented:

Tested the faster boot on an Amstrad 1640 with fd360 minix.
Standard: 14.23s
DF: 11.75s

@ghaerr (Owner, Author) commented Oct 17, 2024

Thanks @toncho11!

I can see my IODELAY "emulation" times are way off, although they take no account of CPU speed.

I'm also very surprised at the faster DF boot: not sure why that is, except that it's reading full tracks rather than semi-partial tracks, which could make a big difference, although my testing suggests otherwise.

Do you have any idea how much faster this "faster boot" version is compared to 0.8.0 or 0.8.1?

@toncho11 (Contributor) commented:

Before, it was 2.4 lines of horizontal dots ....... on the screen; now it is 1.6, although I do not know where you start the count.
It looks faster and will speed up future testing!
Thank you!

@ghaerr (Owner, Author) commented Oct 17, 2024

The dots on the screen are the kernel loading; it sounds like the kernel is smaller now, not sure why that is?!

The timer starts when the kernel code is jumped to, indicated by "START" displayed on the screen, which will be after the dots. Perhaps we should leave the elapsed-time display on for a while so that comparisons can be made. Glad to hear it seems faster; I think it is quite a bit faster, but it sounds like we need a stopwatch to tell for sure.

@toncho11 (Contributor) commented:

I can confirm the numbers above:

Standard: 14.23s
DF: 11.75s

DF is faster by approximately 20% on my Amstrad.
