Attendees: Ann, Brook, Mehdi, Rhys, Thomas B, Thomas W.
Apologies: Andrea, Leif, Miro.
Notes:
Broadly speaking, we split the monitoring work into three areas:
- Operational Monitoring (including things like FedLab and CoCo Monitoring).
- Management Reports.
- Usage Monitoring.
We also have to think about the split between monitoring that is specifically around the eduGAIN service and broader federation requirements (the former is the priority within this task, but we are interested in other approaches too). There is a summary of monitoring tools on the REFEDS wiki, although this doesn't currently cover usage monitoring.
For eduGAIN, https://technical.edugain.org/entities.php is now well developed and working well. We could do more to promote this, but it has already proved useful in several scenarios. https://validator.edugain.org/ has the same look and feel, so from a user perspective that works well. The eduGAIN team are looking at switching on notifications based on validator results and thinking about how to make that work well. It would be useful to think about integrating https://monitor.edugain.org/coco/ and https://access-check.edugain.org/accountmanager with the entities and validator tools to give a consistent user experience (the focus is on user experience; the tools don't need to share a back-end). This should be discussed in more detail at the Zurich meeting, but it also depends on the wider question of how the eduGAIN websites are managed.
Usage stats remain one of the harder elements: despite the RAPTOR and AMAAIS developments, convincing institutions to provide this information in a mesh environment is difficult. RS gave an update on the problems of rolling out RAPTOR. At the moment there is little incentive for institutions to engage in this and quite a lot of work involved - we need to look at a variety of models with centralised elements to take this forward.
A good starting point would be to work with the hub-and-spoke federations and ask them to share their current data, to see if we can use their formats to make some initial judgements. We need to look at who can do some processing on this information.
What statistical data CAN we provide now, if we assume we cannot provide genuine numbers of authentications?
- No. of entities (easy, see MET / eduGAIN stats).
- No. of users (NH collating via REFEDS - will never be completely accurate but still a useful number).
- Predicted average logins per entity per month, based on guesstimates extrapolated from a cross-section of federation information (see the sketch after this list)?
- Ratio of access that is local vs. via eduGAIN, per federation?
- Other?
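
For the "predicted average logins" item above, a minimal sketch of how such an extrapolation might be computed: take a small cross-section of federations that can report real login counts, derive an average logins-per-entity figure, and scale it to the full set of eduGAIN entities. All federation names, entity counts, and login figures below are invented placeholders for illustration, not real data.

    # Hypothetical extrapolation sketch (Python) - all numbers are placeholders.
    # Sample data: (federation, entities in sample, reported logins per month)
    sample = [
        ("FedA", 120, 450_000),
        ("FedB", 60, 150_000),
        ("FedC", 200, 900_000),
    ]

    # Total entity count, e.g. taken from MET / eduGAIN stats (placeholder value).
    total_edugain_entities = 1_500

    sample_entities = sum(entities for _, entities, _ in sample)
    sample_logins = sum(logins for _, _, logins in sample)

    # Average logins per entity per month across the sampled federations.
    avg_logins_per_entity = sample_logins / sample_entities

    # Extrapolate to everything in the eduGAIN metadata aggregate.
    predicted_total = avg_logins_per_entity * total_edugain_entities

    print(f"avg logins/entity/month: {avg_logins_per_entity:,.0f}")
    print(f"predicted eduGAIN-wide logins/month: {predicted_total:,.0f}")

Such a figure would only be a guesstimate (the sample is unlikely to be representative), but it could give an order-of-magnitude number while genuine authentication counts remain out of reach.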
No update on F-ticks as Leif and Miro couldn't make it - NH will follow up with them separately.
FedLab discussions were also dealt with separately (notes on the wiki) - NH will be asking people to help run interoperability tests on the existing platform as part of the SA5 task.
Questions to explore further at Zurich meeting:
- Can we get usage data from the hub-and-spoke federations now?
- How can we improve the user experience for the four main eduGAIN monitoring tools at the moment?
- Should the harmonisation task write a position / value paper on statistics and requirements?
- What other statistics can we provide in the meantime, if we consider that genuine / real information on completed authentications is too difficult to obtain? (Brainstorm some of these in Zurich.)
- Can we do more work on a "centralised RAPTOR" approach along the lines of Edugate work?