This was the course that got me started on my FP journey about 8 years ago. As you said, it's very difficult, but if you can trudge through it, it really pays off in day-to-day work.
This is similar to how we handle Protobuf, Avro, and JSON. Each service has its own metadata store that contains that service's registered schemas. In a pre-deploy job we check those schemas against Confluent's Schema Registry for breaking changes. If there are breaking changes, the service doesn't get deployed.
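For anyone curious, here's a minimal sketch of what that pre-deploy check might look like using the Schema Registry's compatibility endpoint (the registry URL, subject names, schema file paths, and the CI exit behavior are all just assumptions for illustration, not our actual setup):

```python
import json
import sys

import requests

REGISTRY_URL = "http://schema-registry:8081"  # hypothetical registry address


def is_compatible(subject: str, schema_str: str, schema_type: str = "AVRO") -> bool:
    """Ask the registry whether the candidate schema is compatible with the
    latest registered version for this subject."""
    resp = requests.post(
        f"{REGISTRY_URL}/compatibility/subjects/{subject}/versions/latest",
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
        data=json.dumps({"schema": schema_str, "schemaType": schema_type}),
    )
    resp.raise_for_status()
    return resp.json().get("is_compatible", False)


def main() -> None:
    # Candidate schemas the service wants to deploy, keyed by subject.
    # In practice these would come from the service's own metadata store.
    candidates = {
        "orders-value": open("schemas/orders.avsc").read(),  # hypothetical subject/file
    }
    incompatible = [
        subject for subject, schema in candidates.items()
        if not is_compatible(subject, schema)
    ]
    if incompatible:
        print(f"Breaking schema changes detected for: {incompatible}")
        sys.exit(1)  # fail the pre-deploy job so the service doesn't ship


if __name__ == "__main__":
    main()
```

The nice part of doing it this way is that the registry enforces whatever compatibility mode the subject is configured with (backward, forward, full), so the CI job doesn't need its own diffing logic.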
"This is why Kafka cannot be regarded as a database; not without twisting the basic definition". While I agree with this statement, saying that kafka is not a database because it doesn't have all the bells and whistles of a traditional database is dangerous. It feeds corpo politicians who don't believe kafka can replace a database which is false. So even if it's not 100% true (yet) I still thinks it makes more sense to call Kafka a database to make it more accessible.