PIX is an online public service for assessing and certifying digital skills. The certification is recognized by the French State and has been, since September 2019, the new digital skills certification for all pupils and students in France, replacing the C2i (computer and internet certificate). PIX's 5 modules are broken down into 2 to 4 sub-modules each, assessed on 8 levels. At the time of writing (February 2021), only the first 6 levels are accessible; the others are under development. As we will see, there is no need to wait for all 8 levels of each sub-module to be released before starting to assess your digital skills: the existing ones already go quite far!
Adaptive learning. The level in each competency is assessed by answering increasingly complex questions, following the principle of adaptive learning: the questions asked vary depending on the answers you provide. As a result, it is difficult to gauge the completeness of the skills PIX tests (you have to go read the code, which is freely available, to find out). If the proof behind what I am putting forward in the spoiler does not interest you, I encourage you to skip ahead to the chapter "Beyond the observation, what is this analysis for?"
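To make the idea concrete, here is a minimal sketch of an adaptive assessment loop. Everything in it is an assumption for illustration: PIX's real algorithm lives in its open-source code, and the binary-search strategy, level range and simulated test-taker below are not taken from it.

```python
def ask(level):
    """Placeholder for presenting a question of the given difficulty.

    Here we simulate a test-taker whose true level is 4: they answer
    correctly whenever the question's level is at or below 4.
    """
    return level <= 4

def adaptive_assessment(min_level=1, max_level=8, rounds=6):
    """Estimate a level by adapting question difficulty to the answers."""
    low, high = min_level, max_level
    estimate = (low + high) // 2
    for _ in range(rounds):
        if ask(estimate):
            low = estimate + 1   # correct answer: try harder questions
        else:
            high = estimate - 1  # wrong answer: ease off
        if low > high:
            break                # the interval has collapsed onto a level
        estimate = (low + high) // 2
    return min(high, max_level)

print(adaptive_assessment())  # → 4, the simulated test-taker's level
```

Because each answer narrows the search interval, two test-takers rarely see the same sequence of questions, which is exactly why the full set of skills tested is hard to enumerate from the outside.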
The primary use of a satisfaction survey is to identify the best actions to take to improve users' satisfaction with a service. Once the survey has been conducted, the most common approach is to analyze the sources of respondents' dissatisfaction in the survey responses, then deduce a detailed, prioritized action plan to remedy that dissatisfaction. In the diagram opposite, the sub-criteria of satisfaction linked to the provision of internet access are positioned according to respondents' level of satisfaction. But is this prioritization method sufficient? I do not believe so, because it fails to take into account 3 realities:
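The common approach described above amounts to ranking sub-criteria from least to most satisfied. A small sketch, using invented criterion names and scores (not the ARCEP survey's actual data), shows what that naive prioritization produces:

```python
# Hypothetical survey results: average satisfaction per sub-criterion on a
# 0-10 scale. Names and scores are invented for illustration.
satisfaction = {
    "connection speed": 6.4,
    "service reliability": 6.8,
    "customer support": 6.5,
    "billing clarity": 7.2,
}

# Naive prioritization: act first on the lowest-scoring criteria.
action_plan = sorted(satisfaction.items(), key=lambda kv: kv[1])
for rank, (criterion, score) in enumerate(action_plan, start=1):
    print(f"{rank}. {criterion} (satisfaction: {score})")
```

Running this simply orders the criteria by score, with "connection speed" first. The rest of the article questions whether that ordering alone is a sound basis for an action plan.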
When we have to validate an action plan, we are usually working under a constrained budget/deadline pair: the resources we put in one place cannot be placed elsewhere. In other words, this method does not tell us whether we need to invest simply to maintain satisfaction on certain criteria. Moreover, since budgets are always constrained, it would be relevant to target the actions that will really matter to customers, namely those relating to the criteria most important to them. We also see that, in the ARCEP survey, the average satisfaction on each sub-criterion falls within a narrow interval (scores ranging from 6.4 to 7.2):
it seems risky to undertake actions, which will cost money, on the basis of a prioritization built on the satisfaction indicator alone. I therefore propose adding a dimension to the prioritization method: the importance of each criterion to customers. Let us first apply this dimension using the importance declared by respondents for each sub-criterion. This does not help us much: on the one hand, we find the "white noise" effect (everything is declared important), which is useless in a prioritization exercise; on the other hand, we have just seen that criteria that are a priori important to respondents are not so in reality.
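One common way around the "white noise" of declared importance is to derive importance statistically, for example by correlating each sub-criterion's score with overall satisfaction across respondents. This is a sketch of that idea under invented data, not necessarily the derivation used in the article:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each row: one respondent's scores for a sub-criterion (invented data).
responses = {
    "connection speed": [5, 7, 6, 8, 4],
    "billing clarity":  [7, 7, 8, 7, 7],
}
overall = [5, 7, 6, 8, 5]  # the same respondents' overall satisfaction

for criterion, scores in responses.items():
    print(criterion, round(pearson(scores, overall), 2))
```

Here "connection speed" tracks overall satisfaction closely (correlation near 1) while "billing clarity" barely moves it, even though respondents might declare both "important" if asked directly. Derived importance separates the criteria that actually drive satisfaction from those that do not.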
P2: Criteria where satisfaction must at least be maintained: those which are important to customers and where satisfaction is already high
P3: Criteria to improve later, if budget/deadlines allow: those which are not very important to customers and where customers are least satisfied
P4: The "room for maneuver" criteria: those where customers are satisfied but which matter little to them
With this method, we see that the prioritization of criteria in our action plan changes significantly. In addition, this prioritization now takes into account the need to maintain satisfaction on certain criteria:
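The quadrants above amount to crossing two axes: importance to customers and current satisfaction. A short sketch of that classification, with invented thresholds and data (the article's own cut-off values are not given here):

```python
IMPORTANCE_CUT = 0.5    # above: important to customers (assumed threshold)
SATISFACTION_CUT = 6.5  # above: customers already satisfied (assumed threshold)

# (satisfaction, derived importance) per hypothetical sub-criterion
criteria = {
    "connection speed":    (6.4, 0.9),
    "service reliability": (6.8, 0.8),
    "customer support":    (6.5, 0.3),
    "billing clarity":     (7.2, 0.2),
}

def quadrant(satisfaction, importance):
    """Map a criterion onto the P1-P4 priority grid."""
    if importance > IMPORTANCE_CUT:
        # Important criteria: improve if satisfaction lags, else maintain.
        return "P1: improve first" if satisfaction <= SATISFACTION_CUT else "P2: maintain"
    # Less important criteria: improve later, or treat as room for maneuver.
    return "P3: improve later" if satisfaction <= SATISFACTION_CUT else "P4: room for maneuver"

for name, (sat, imp) in criteria.items():
    print(f"{name}: {quadrant(sat, imp)}")
```

With the satisfaction-only ranking, "connection speed" and "customer support" would have been treated alike (scores 6.4 and 6.5); crossing in importance sends the first to P1 and the second to P3, which is precisely how the prioritization changes.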