![]() "y" is the current target, assuming that the current u weightings are correct.Īfter finding "better" uweightings in the least-squares optimization, I recompute the new target y and compute the next u until a fixed point is reached. Y is simply the current weighting result y = A u_prev. With "A" being the matrix of votes done by people in the "re-rating" group and "agg" being the aggregated voter vector. Users with more than X ratings are used to compute weightings, specifically we attempt to reweight them such that These users get aggregated into one mean (i.e. The "aggregation" group is comprised of the set of users with less than X number of ratings. The idea is to split the voters into an "aggregation" and a "re-rating" group: I now have an initial "smart" selection of messages based on a hierarchical consensus finding scheme: The vote aggregation script is much more sophisticated and computes optimal weightings for each power-user based on aggregations of non-power users: The truth is probably somewhere in the middle: since we only have ~3 rankers per message, if we had one that is consistently screwing with things, he would be hard to detect since by sheer luck he would agree enough with another ranker to overall vanish in the correlations. there are so few of them, that they overall don't affect the results sufficiently negatively to be noticable.there are so many of them, the ranking results are so far beyond repair that the merged rankings themselves make no sense.This doesn't mean that individual bad actors don't exist, but it does mean that either Currently it does not find statistically significant outliers that are also bad actors ![]() The rankings is relatively simple and computes the expected kendall-tau correlation of a user to the consensus vote. This adds in two scripts to compare rankings and votes.