Talk is open source and installable by anyone on their own servers, at their leisure. They have tech docs (very little, but to the point). Anyway, since I have not installed it to try it, I'll go off what I can see.
Let's take a look at the UI for the admin/moderators.
Very simple looking and to the point. Keyboard shortcuts, I guess, were desired by moderators? Fat-fingering is a common problem, so I assume the decisions are easily reversible (an undo shortcut?). In terms of sorting, newest-first is questionable. Some comment streams are high velocity, so do I really want to look at newest first from the get-go? Looking at the comments stream, I see links are highlighted, and I assume it will expand rich URL (HTML) previews as well. Not every link is a bad place, so that red-colored info button looks very ominous and will probably be ignored in the long run as a UX thing. UX is hard to get right since each community has its own way of interacting. Let me give my thoughts on moderation.
In the end, even for the newsroom industry, we want to reduce heavy moderation since it is a bottleneck, so I fail to see how adding shortcuts addresses one of the common complaints/conversations around moderation (light vs. heavy moderation). One of the "tenets" of Talk is the idea of getting moderators to focus on the positive comments, which again I fail to see why I would want. My take is that any website that wants a comments section only wants humans to make decisions in the cases where a machine cannot, rather than focusing on either end of the positive/negative spectrum.
Since the software is self-hosted, I wonder how the project will expand to include machine learning techniques to harvest data and build models. Is there a way to collect that data, have someone do data analysis to fit a model, and hook that model right into a moderation decision engine? You may want traffic/comment shadowing, where both models run side by side so you can compare moderation performance and decide whether to keep the new model or make adjustments. Are those models shareable? Do publishers really care that much about owning their data that they are unwilling to share moderation data or commenting data (comments are public, by the way)?
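The shadowing idea above can be sketched roughly like this. This is a hypothetical illustration, not Talk's actual API: `baseline_model` and `candidate_model` are stand-ins for whatever moderation models a publisher might plug in, and only the live model's decision ever takes effect while disagreements get logged for later analysis.

```python
# Hypothetical sketch of comment shadowing for moderation models.
# All names here (baseline_model, candidate_model, moderate) are
# illustrative assumptions, not part of Talk.

def baseline_model(comment: str) -> str:
    """Stand-in for the live model: reject comments containing banned words."""
    banned = {"spam", "scam"}
    return "reject" if any(w in comment.lower() for w in banned) else "approve"

def candidate_model(comment: str) -> str:
    """Stand-in for the model under test: additionally reject all-caps shouting."""
    if comment.isupper():
        return "reject"
    return baseline_model(comment)

# Log of (comment, live decision, shadow decision) where the models disagree.
disagreements = []

def moderate(comment: str) -> str:
    live = baseline_model(comment)      # this decision takes effect
    shadow = candidate_model(comment)   # computed and logged, never shown to users
    if live != shadow:
        disagreements.append((comment, live, shadow))
    return live

for c in ["Nice article!", "BUY THIS SCAM NOW", "GREAT POST"]:
    moderate(c)

print(disagreements)
```

Reviewing the disagreement log over real traffic is how you would judge whether the candidate model is worth promoting to live.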
Alright, let's look at what a community member might see below an article with Mozilla Talk.
So, of course, I look at this UI and the first question that pops up is: how is this different from Disqus or Discourse? I wonder if I have to create an account on each site that embeds its own instance of Talk. Anonymity on news sites? Please, not going to happen. That respect button looks like a +1 sort of thing, so does that give me reputation in this particular community? I'm far more interested in moderation automation and UX than in the UI itself, but I have to indulge myself.
I have to wonder if self-hosted commenting systems are scalable at all beyond small installations. The whole point of commenting systems, to me, is to increase engagement so that users come back and form a sense of community. Once you get to a certain size, you are going to have separate communities/categories of people. Who is reputable with respect to particular categories? Should users be marked as reputable, or should other users figure that out by reading that user's comments? Commenting systems are essentially forums, so why separate the concepts? I thought Discourse made a good step with forums as embedded comments... There are missing pieces here that Mozilla's Talk needs to address for it to be taken seriously, as it looks like more of the same to me.
Pretty tired of writing about this topic today. I do want to bring my own commenting system back from the dead, though, as I believe I care enough about it to have notes upon notes on it and its target customers.