opus-mt-tc-big-tr-en
LLaVA-Octopus: Unlocking Instruction-Driven Adaptive Projector Fusion for Video Understanding
arXiv:2501.05067v3 Announce Type: replace-cross Abstract: In this paper, we introduce LLaVA-Octopus, a novel video multimodal large language model. LLaVA-Octopus adaptively weights features from different visual projectors based on user instructions, enabling us to leverage the complementary strengths…

Anthropic releases a new Opus model amid Mythos Preview buzz
Anthropic has released its most powerful "generally available" model to date: Claude Opus 4.7. The company called it a step up from Opus 4.6 for advanced software engineering tasks, particularly in complex coding areas that in the past required more hand-holding. It's also supposed to be better at…
Poisoned Identifiers Survive LLM Deobfuscation: A Case Study on Claude Opus 4.6
arXiv:2604.04289v1 Announce Type: cross Abstract: When an LLM deobfuscates JavaScript, can poisoned identifier names in the string table survive into the model's reconstructed code, even when the model demonstrably understands the correct semantics? Using Claude Opus 4.6 across 192 inference runs on…
Haiku to Opus in Just 10 bits: LLMs Unlock Massive Compression Gains
arXiv:2604.02343v1 Announce Type: new Abstract: We study the compression of LLM-generated text across lossless and lossy regimes, characterizing a compression-compute frontier where more compression is possible at the cost of more compute. For lossless compression, domain-adapted LoRA adapters can…