Cat@ponder.cat to Technology@lemmy.world · English · edited 23 days ago
Perplexity open sources R1 1776, a version of the DeepSeek R1 model that CEO Aravind Srinivas says has been "post-trained to remove the China censorship". (www.perplexity.ai)
brucethemoose@lemmy.world · 2 days ago
In the 32B range? I think we have plenty of uncensored thinking models there; maybe try fusion 32B. I'm not an expert, though, since models trained from base Qwen have been sufficient for me.
Even_Adder@lemmy.dbzer0.com · 2 days ago
I just want to mess with this one too. I had a hard time finding an abliterated one before that didn't fail the Tiananmen Square question regularly.
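(Aside: "failing the Tiananmen Square question regularly" is usually spotted by eye, but a crude keyword heuristic can automate the spot-check. This is a minimal, hypothetical sketch; the marker phrases and sample replies are illustrative, not from any real model run.)

```python
# Hedged sketch: flag refusal/deflection-style replies when spot-checking
# an "uncensored" or abliterated model against a sensitive prompt.
# The marker list is illustrative and far from exhaustive.

REFUSAL_MARKERS = [
    "i cannot",
    "i can't",
    "i'm sorry",
    "as an ai",
    "against my guidelines",
]

def looks_like_refusal(reply: str) -> bool:
    """Return True if the reply contains a common refusal phrase."""
    lowered = reply.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

# Run the same prompt several times and estimate the refusal rate.
# These replies are made-up examples for demonstration only.
replies = [
    "I'm sorry, but I can't discuss that topic.",
    "The 1989 Tiananmen Square protests were a student-led movement...",
]
refusal_rate = sum(looks_like_refusal(r) for r in replies) / len(replies)
print(refusal_rate)  # 0.5 for this made-up sample
```

In practice you'd feed in real model outputs (e.g. from a local llama.cpp or Ollama server) instead of the hard-coded sample, and a keyword check like this only catches boilerplate refusals, not subtler evasions.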