spamassassin-dev December 2011 archive

Re: Regarding Scoring of Mailspike

From: Warren Togami Jr. <wtogami_at_nospam>
Date: Tue Dec 13 2011 - 03:11:21 GMT
To: "Kevin A. McGrail" <>

On Mon, Dec 12, 2011 at 1:34 PM, Kevin A. McGrail <> wrote:
> As for the gap, the behavior of what people think things do and what they
> actually do on the infrastructure sometimes differs more than I like.  You
> and I need to set up a box that replaces zones and zones2 that is more
> modern.

Totally in agreement. This would be a pretty major project even for
someone working full-time. Maybe we could prepare now to get Google
Summer of Code funding for this?

>> * Set the scores to be conservative (see AXB's post) prior to GA
>> balancing.  Let ZBI and L's float in GA rescoring.  Then adjust L's to
>> be linear and increase the H's conservatively after we look at the GA
>> results and do some quick fp-fn tests across the entire set.
>> if (version >= 3.004000)
>> #MAILSPIKE RBL ENABLED FOR SA3.4 and above - BUG 6400
>> # FLOATING SCORES FOR GA - adjust after GA to make L3 to L5 linear
>>   score RCVD_IN_MSPIKE_ZBI     2.7
>>   score RCVD_IN_MSPIKE_L5      2.5
>>   score RCVD_IN_MSPIKE_L4      1.7
>>   score RCVD_IN_MSPIKE_L3      0.9
>> # TEMPORARILY FIXED SCORES - adjust these higher after we look at the GA results
>> # (pending discussion: none of the whitelists should affect the blacklist balancing, as they are orthogonal.)
>> # I suspect these should be something like H3 = 0.5, H4 = 1.0, H5 = 2.0, alongside big reductions in IADB and DNSWL.
>>   score RCVD_IN_MSPIKE_H3      -0.01
>>   score RCVD_IN_MSPIKE_H4      -0.01
>>   score RCVD_IN_MSPIKE_H5      -0.01
>> ## These are informational rules, useful in statistical comparisons
>> # FIXED SCORES - leave these scores this way for release
>>   score RCVD_IN_MSPIKE_BL      0.01
>>   score RCVD_IN_MSPIKE_WL      -0.01
>> endif
> Your process makes sense and I'll look forward to reviewing the proposed
> rule scores as much as the process to determine your score recommendations.

I suspect the auto-balanced scores will come out much lower than what
you proposed. But given our inability to do a proper apples-to-apples
comparison due to the "reuse" issue, combined with our gut feeling
that MSPIKE performs better than masscheck suggests, I would probably
adjust the scores somewhat higher by hand.
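
For concreteness, the H3/H4/H5 suggestion in the comments above would
translate to something like the sketch below. The values are the
suggested magnitudes negated, since these are whitelist rules and a hit
should subtract from the total score; they are illustrative placeholders
pending the GA results and fp-fn tests, not tested scores:

  if (version >= 3.004000)
  # HYPOTHETICAL post-GA whitelist scores - magnitudes from the H3/H4/H5
  # suggestion above, negated so a whitelist hit lowers the total score;
  # would be paired with reductions in IADB and DNSWL as noted above
    score RCVD_IN_MSPIKE_H3      -0.5
    score RCVD_IN_MSPIKE_H4      -1.0
    score RCVD_IN_MSPIKE_H5      -2.0
  endif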