So, given that you know how the NBC works, consider a corpus of sentences or reviews: for every negative example we see, we add to the word counts of its words in the negative class, which increases the estimated probability of each of those words given the negative class (these per-word probabilities are exactly what the feature-independence assumption lets the NBC multiply together), and the same holds for positive examples. The more granular you get with your features, the better your classification may go, but that is not always true! Think about your feature set.
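To make the counting concrete, here is a minimal sketch of that training step in Python. The toy corpus, labels, and helper name `log_likelihood` are all illustrative assumptions, not a real library API; add-one (Laplace) smoothing is used so unseen words don't zero out the product.

```python
from collections import Counter, defaultdict
import math

# Hypothetical toy corpus for illustration only.
train = [
    ("negative", "this movie was awful and boring"),
    ("negative", "awful acting, boring plot"),
    ("positive", "a wonderful and moving film"),
    ("positive", "wonderful acting, great plot"),
]

# Count how often each word appears under each class label.
counts = defaultdict(Counter)
class_totals = Counter()
for label, text in train:
    for word in text.replace(",", "").split():
        counts[label][word] += 1
        class_totals[label] += 1

vocab = {w for c in counts.values() for w in c}

def log_likelihood(word, label):
    # log P(word | class) with add-one (Laplace) smoothing.
    return math.log((counts[label][word] + 1) /
                    (class_totals[label] + len(vocab)))

# Seeing "awful" twice in negative reviews makes it far more
# probable under the negative class than the positive one:
print(log_likelihood("awful", "negative") > log_likelihood("awful", "positive"))
```

The NBC then scores a new sentence by summing these log-likelihoods per class (plus the class prior) and picking the larger total.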
With the NBC, we assumed that every word in our document (sentence/review) is independent of every other word, but intuitively we KNOW that is not a good assumption. We can think of using bi-grams to help increase recall (true positives (TP) / (TP + false negatives (FN))). That alone is not enough, due to various factors such as words in the sentence that don't contribute to the sentiment, and stacked negations such as "not not not", and so on. Basically, words in sentences are not orthogonal! In other words, they are dependent on each other. We have more advanced text information-extraction techniques to help the NBC's accuracy, and they mainly have to do with FEATURE EXTRACTION. If you could come up with a way to extract all conjunctions/pairings of words such that the features are orthogonal and capture the key information that contributes to sentiment, then you have done your job: the independence assumption holds, and therefore the NBC will get better with a bigger dataset. It is one heck of a challenging problem! :)
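As a small sketch of the bi-gram idea mentioned above (the function name `bigrams` is just an illustrative choice): pairing each word with its successor turns a negation and the word it modifies into a single feature, instead of two features the NBC would treat as independent.

```python
def bigrams(text):
    # Pair each word with its successor, so a negation like
    # "not good" becomes one feature instead of two
    # supposedly independent words.
    words = text.split()
    return list(zip(words, words[1:]))

print(bigrams("not good at all"))
# -> [('not', 'good'), ('good', 'at'), ('at', 'all')]
```

Each tuple can then be counted per class exactly like a single word; the catch, as noted above, is that bi-grams alone still leave plenty of dependence (and noise) in the feature set.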
I end with this: the better your feature set, the better the classifier's accuracy will get given more data. So the question isn't why the Bayes classifier makes sense for sentiment analysis, but how you can derive a good feature set for the NBC.