In the Anglophone Caribbean, Spanish plays an important role, both in high-school education and as a desired communication asset. This study forms part of ongoing research aimed at developing rating scales for the assessment of interactive speaking performance in an advanced-level university Spanish course. Participants (n=30) are Spanish majors and minors in the final year of their degree, all of whom have received at least nine (9) years of Spanish language instruction. In this academic context, Spanish language instruction and the curriculum follow level C1 of the Common European Framework of Reference (CEFR). Classroom teaching activities include discussions and debates on complex topics (politics, immigration, gender, etc.), as these tasks demand greater interactive ability of test takers at this level. Similar tasks are used in each of the two peer-to-peer oral tests, each of which is assessed by two raters. A performance data-driven approach has been chosen for developing the rating scales, since the descriptors provided by the CEFR's rating scales for conversation at the C1 level leave room for a wide range of discrepant ratings of similar test performances. Feedback sessions on participants' performance provided after each test, together with raters' verbal reports produced while assessing students' performances and test-takers' self-assessments, not only allow us to arrive at a more robust definition of the construct of interactional competence for this particular speaking test, but also enable the operationalization of this construct, provide more specific descriptors for the new rating scales, and facilitate the validation process.