The issue is that MapReduce joins are typically implemented by giving records that match on some field the same reduce key, so that they are sent to the same reducer. Anything that works around this is going to be a bit of a hack, but it is possible...
Here's what I would recommend: for each input record, generate three copies, each with a new "key" field whose value is prefixed by the name of the field it came from. For example, say you had the following input:
(ip=1.2.3.4, session=ABC, cookie=123)
(ip=3.4.5.6, session=DEF, cookie=456)
Then you would generate
(ip=1.2.3.4, session=ABC, cookie=123, key=ip_1.2.3.4)
(ip=1.2.3.4, session=ABC, cookie=123, key=session_ABC)
(ip=1.2.3.4, session=ABC, cookie=123, key=cookie_123)
(ip=3.4.5.6, session=DEF, cookie=456, key=ip_3.4.5.6)
(ip=3.4.5.6, session=DEF, cookie=456, key=session_DEF)
(ip=3.4.5.6, session=DEF, cookie=456, key=cookie_456)
And then you could simply group on this new field.
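Not Scalding code, but the replicate-and-group idea above can be sketched in plain Python (the field names, `replicate`, and `group` helpers are all hypothetical, and the `defaultdict` stands in for the shuffle phase):

```python
from collections import defaultdict

# The fields we want to be able to join on.
JOIN_FIELDS = ["ip", "session", "cookie"]

def replicate(record):
    """Map step: emit one copy of the record per join field,
    keyed by '<field>_<value>' so that records matching on any
    of the fields end up under the same key."""
    for field in JOIN_FIELDS:
        yield f"{field}_{record[field]}", record

def group(records):
    """Simulate the shuffle: bucket the replicated copies by key."""
    buckets = defaultdict(list)
    for record in records:
        for key, copy in replicate(record):
            buckets[key].append(copy)
    return buckets

records = [
    {"ip": "1.2.3.4", "session": "ABC", "cookie": "123"},
    {"ip": "3.4.5.6", "session": "DEF", "cookie": "456"},
]
buckets = group(records)
# Each record now appears under three keys, e.g. 'ip_1.2.3.4',
# 'session_ABC', and 'cookie_123'.
```

The cost is that every record is materialized once per join field, which is exactly the replication described above.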
I'm not too familiar with Scalding/Cascading (although I've been meaning to learn more about them), but this would definitely conform to how joins are generally done in Hadoop.