Slow dashboard loading #17548
Comments
Please enable debug mode for your user (you need to be on a super-user profile). Then, when on the page with the dashboard, click the SQL requests widget in the debug bar at the bottom of the screen (the 2nd button). Click the "Time" column header to sort by the slowest. Can you identify the slow queries?
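If the debug bar is impractical, slow statements can also be captured on the database side with MariaDB's slow query log. This is a generic MariaDB sketch, not a GLPI feature; the threshold value is only illustrative, and on Amazon RDS these settings are normally changed through the DB parameter group rather than with SET GLOBAL.

-- Log statements slower than 2 seconds (illustrative threshold).
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 2;

-- Review what was captured (available when log_output includes 'TABLE').
SELECT start_time, query_time, sql_text
FROM mysql.slow_log
ORDER BY query_time DESC
LIMIT 10;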
We didn't know about the debug mode yet, but upon following your instructions, we noticed that it doesn't seem to be the database that is slowing down the loading of the dashboard, but something related to the AJAX requests. However, I mentioned a possible database issue because the dashboard loading seemed to have slowed down after we reduced the size of the DB instance. Here are the screenshots that show the time it takes to load the dashboard. It takes quite a while to load the "Network inventoried devices" widget, but there are no slow queries at all. Do you have any idea of what is causing all this delay in loading this specific widget and the dashboard as a whole?
It looks like there is a bug that kept the data for the requests made to load the dashboard cards from being saved on the server side, so none of the SQL requests are included in the debug bar. As for the request timings, this is the entire request time, including the time spent stalled. In debug mode, your session file isn't closed early, so each request has to wait for the previous ones to finish in order to get a lock on the session file before it can open it. You can see more detailed timing info in your browser's developer tools (F12 > Network tab > select a request > Timing sub-tab in Chrome). For the missing data from the dashboard requests, I opened #17567, which seems to resolve the issue.
I'll wait for the next release to include the fix from #17567 so that we can get more information on which queries are slowing down the loading of some dashboard cards. If this problem affects more people, then maybe a patch would be necessary. Otherwise, we'll see how we can optimize our database to improve the user experience.
Since the patch is quite small and has a low impact, you may want to consider applying it right now, so you can report the problematic queries as soon as possible.
Hello everyone, I'm having the same issue as @rafaelse. We are running GLPI 10.0.16 behind an AWS Application Load Balancer (ALB), with 2 EC2 instances and an EFS volume to store the GLPI configuration; the database is handled by a db.t4g.small RDS instance with MariaDB 10.6.18. Dashboard cards are slow to load and some of the cards won't load at all. We discovered this issue when we started using the helpdesk dashboard, which has a lot of trouble loading; some of the cards reach the maximum request time and throw a 504 error.
There has been no activity on this issue for some time and therefore it is considered stale and will be closed automatically in 10 days. If this issue is related to a bug, please try to reproduce it on the latest release. If the problem persists, feel free to add a comment to revive this issue. You may also consider taking out a subscription to get professional support or contact the GLPI editor team directly.
I would like to add that the boxes in "Personal View" and "Group View" are also very slow to load (>= 10 s), but I noticed that they only take that long when we have plugins activated. I should also mention that this is happening in our staging environment, where we're testing AWS S3 Mountpoint and EFS for volumes, whereas our production environment, backed by EBS volumes, is running fine. I've also tried EBS in this staging environment, but the slowness still persists. In case you have any hint as to what the problem might be, it would be most welcome. Otherwise, I will try to replicate the production environment and apply changes one by one until the cause comes up. As of now, we have ruled out the database (RDS) as the source of the delay.
The aforementioned problem was already solved, still keeping S3 and EFS as persistent volumes. However, returning to the original issue, the box in question still takes a while to load. After applying this fix, it is now noticeable that the above-average delay is due to a long SQL query:
SELECT DISTINCT `glpi_networkequipments`.`id` AS id, '789899' AS currentuser,
  `glpi_networkequipments`.`entities_id`, `glpi_networkequipments`.`is_recursive`,
  `glpi_networkequipments`.`name` AS `ITEM_NetworkEquipment_1`,
  `glpi_networkequipments`.`id` AS `ITEM_NetworkEquipment_1_id`,
  `glpi_entities`.`completename` AS `ITEM_NetworkEquipment_80`,
  `glpi_manufacturers`.`name` AS `ITEM_NetworkEquipment_23`,
  `glpi_networkequipmentmodels`.`name` AS `ITEM_NetworkEquipment_40`,
  `glpi_locations`.`completename` AS `ITEM_NetworkEquipment_3`,
  GROUP_CONCAT(DISTINCT CONCAT(IFNULL(`glpi_ipaddresses_3714572b4d6732a91ac0d68a00a3c328`.`name`, '__NULL__'),
    '$#$', `glpi_ipaddresses_3714572b4d6732a91ac0d68a00a3c328`.`id`)
    ORDER BY `glpi_ipaddresses_3714572b4d6732a91ac0d68a00a3c328`.`id` SEPARATOR '$$##$$') AS `ITEM_NetworkEquipment_126`,
  GROUP_CONCAT(DISTINCT CONCAT(IFNULL(`glpi_vlans_30b720f4ff8116eb3dfe981ed77541e0`.`name`, '__NULL__'),
    '$#$', `glpi_vlans_30b720f4ff8116eb3dfe981ed77541e0`.`id`)
    ORDER BY `glpi_vlans_30b720f4ff8116eb3dfe981ed77541e0`.`id` SEPARATOR '$$##$$') AS `ITEM_NetworkEquipment_88`,
  `glpi_networkequipments`.`serial` AS `ITEM_NetworkEquipment_5`,
  `glpi_networkequipments`.`otherserial` AS `ITEM_NetworkEquipment_6`,
  `glpi_networkequipments`.`id` AS `ITEM_NetworkEquipment_178_id`,
  `glpi_networkequipments`.`name` AS `ITEM_NetworkEquipment_178_name`,
  COUNT(DISTINCT `glpi_problems_ad1d1102b08981d196811ea88b1a2f20`.`id`) AS `ITEM_NetworkEquipment_140`,
  `glpi_networkequipments`.`date_creation` AS `ITEM_NetworkEquipment_121`,
  `glpi_networkequipments`.`date_mod` AS `ITEM_NetworkEquipment_19`,
  `glpi_autoupdatesystems`.`name` AS `ITEM_NetworkEquipment_72`
FROM `glpi_networkequipments`
LEFT JOIN `glpi_entities`
  ON (`glpi_networkequipments`.`entities_id` = `glpi_entities`.`id`)
LEFT JOIN `glpi_manufacturers`
  ON (`glpi_networkequipments`.`manufacturers_id` = `glpi_manufacturers`.`id`)
LEFT JOIN `glpi_networkequipmentmodels`
  ON (`glpi_networkequipments`.`networkequipmentmodels_id` = `glpi_networkequipmentmodels`.`id`)
LEFT JOIN `glpi_locations`
  ON (`glpi_networkequipments`.`locations_id` = `glpi_locations`.`id`)
LEFT JOIN `glpi_ipaddresses` AS `glpi_ipaddresses_3714572b4d6732a91ac0d68a00a3c328`
  ON (`glpi_networkequipments`.`id` = `glpi_ipaddresses_3714572b4d6732a91ac0d68a00a3c328`.`mainitems_id`
    AND `glpi_ipaddresses_3714572b4d6732a91ac0d68a00a3c328`.`mainitemtype` = 'NetworkEquipment'
    AND `glpi_ipaddresses_3714572b4d6732a91ac0d68a00a3c328`.`is_deleted` = '0'
    AND NOT (`glpi_ipaddresses_3714572b4d6732a91ac0d68a00a3c328`.`name` = ''))
LEFT JOIN `glpi_networkports`
  ON (`glpi_networkequipments`.`id` = `glpi_networkports`.`items_id`
    AND `glpi_networkports`.`itemtype` = 'NetworkEquipment')
LEFT JOIN `glpi_networkports_vlans`
  ON (`glpi_networkports`.`id` = `glpi_networkports_vlans`.`networkports_id`)
LEFT JOIN `glpi_vlans` AS `glpi_vlans_30b720f4ff8116eb3dfe981ed77541e0`
  ON (`glpi_networkports_vlans`.`vlans_id` = `glpi_vlans_30b720f4ff8116eb3dfe981ed77541e0`.`id`)
LEFT JOIN `glpi_items_problems`
  ON (`glpi_networkequipments`.`id` = `glpi_items_problems`.`items_id`
    AND `glpi_items_problems`.`itemtype` = 'NetworkEquipment')
LEFT JOIN `glpi_problems` AS `glpi_problems_ad1d1102b08981d196811ea88b1a2f20`
  ON (`glpi_items_problems`.`problems_id` = `glpi_problems_ad1d1102b08981d196811ea88b1a2f20`.`id`)
LEFT JOIN `glpi_autoupdatesystems`
  ON (`glpi_networkequipments`.`autoupdatesystems_id` = `glpi_autoupdatesystems`.`id`)
WHERE `glpi_networkequipments`.`is_deleted` = 0
  AND `glpi_networkequipments`.`is_template` = 0
  AND ((`glpi_autoupdatesystems`.`name` LIKE '%GLPI Native Inventory%'))
GROUP BY `glpi_networkequipments`.`id`
ORDER BY `ITEM_NetworkEquipment_1` ASC
It takes approximately 7 seconds to run and is the only slow box in our dashboard.
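A useful next step for a query like this is to prefix it with EXPLAIN and look for joins that examine many rows; the GROUP_CONCAT over network ports and IP addresses tends to multiply rows before grouping. Also note that the leading-wildcard LIKE on `glpi_autoupdatesystems`.`name` cannot use an index, which is general MariaDB behaviour rather than anything GLPI-specific. The checks below are a minimal sketch; the exact label 'GLPI Native Inventory' as an equality value is an assumption.

-- How many rows feed the heaviest joins? Large port/IP tables combined with
-- GROUP_CONCAT are a common cause of slow inventory widgets.
SELECT COUNT(*) FROM `glpi_networkports` WHERE `itemtype` = 'NetworkEquipment';
SELECT COUNT(*) FROM `glpi_ipaddresses` WHERE `mainitemtype` = 'NetworkEquipment';

-- A leading-wildcard LIKE cannot use an index on `name`; if the value is an
-- exact label (an assumption here), an equality match is index-friendly:
SELECT `id` FROM `glpi_autoupdatesystems` WHERE `name` = 'GLPI Native Inventory';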
I upgraded from 10.0.16 to 10.0.17 and now I have the same slow loading in the dashboard ;-/ Is there any fix for it?
Solution: tune Apache's mod_evasive ;-)
Code of Conduct
Is there an existing issue for this?
Version
10.0.16
Bug description
Some graphs on the dashboard are very slow to load, or often don't load at all. This mainly applies to the graph that shows the number of tickets by status over the months, but similar symptoms can be seen on the graphs at the top of the ticket listing.
We are running our workloads on AWS EKS, and the database on a db.t4g.small instance in RDS.
As can be seen in the screenshot below, the database load is fine for most of its usage, except for the case mentioned above.
I wonder if there is any optimization that can be done to the database to improve this scenario, such as adding some new indexes or changing some MySQL parameters.
Even on a larger RDS instance, it takes a while to load some of the graphs.
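Before changing parameters, it can help to confirm how much memory InnoDB actually gets and whether queries are spilling to on-disk temporary tables; a db.t4g.small instance only has around 2 GiB of RAM. The statements below are standard MariaDB checks, not GLPI tooling, and on RDS the corresponding variables are adjusted through the DB parameter group.

-- How large is the InnoDB buffer pool? (RDS sizes it from the instance class memory.)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Are reads mostly served from memory? A high ratio of Innodb_buffer_pool_reads
-- to Innodb_buffer_pool_read_requests suggests the pool is too small for the working set.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- Are sorts and GROUP BYs creating on-disk temporary tables?
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';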
Relevant log output
No response
Page URL
No response
Steps To reproduce
No response
Your GLPI setup information
Information about system installation and configuration
Server
GLPI constants
Libraries
SQL replicas
Plugins list
Anything else?
We ended up removing the slow graphs from the dashboards so as not to affect the overall user experience with the system.
Any improvement on this front would be most welcome.