5.16 Group Meeting Notes

Weak Supervision

Entropy -> uniform distribution: \(H(q) = -\sum_{i=1}^{m} q_i \log q_i\), maximized when \(q\) is uniform. Self-Training / Self-Supervised.
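
A quick numeric check of this (a Python sketch; the helper name is ours) that the entropy \(-\sum_{i=1}^{m} q_i \log q_i\) peaks at the uniform distribution:

```python
import numpy as np

def entropy(q):
    """Shannon entropy H(q) = -sum_i q_i * log(q_i)."""
    q = np.asarray(q, dtype=float)
    q = q[q > 0]  # convention: 0 * log 0 = 0
    return -np.sum(q * np.log(q))

# Entropy peaks at the uniform distribution:
print(entropy([0.25, 0.25, 0.25, 0.25]))  # log 4 ≈ 1.386
print(entropy([0.7, 0.1, 0.1, 0.1]))      # ≈ 0.940, strictly lower
```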

Relieve bias?

EM optimization (similarity-based loss + MIL/hinge loss/VAE)
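
To make the MIL/hinge part concrete, here is a hedged Python sketch of a generic bag-level hinge loss for multi-instance learning; the loss form and names are illustrative, not necessarily the exact loss discussed in the meeting:

```python
import numpy as np

def mil_hinge_loss(instance_scores, bag_label, margin=1.0):
    """Bag-level hinge loss for multi-instance learning (MIL).

    A bag is positive if at least one instance is positive, so the bag
    score is the max over its instances; bag_label is +1 or -1.
    """
    bag_score = np.max(instance_scores)
    return max(0.0, margin - bag_label * bag_score)

# Positive bag whose best instance clears the margin: zero loss.
print(mil_hinge_loss(np.array([-0.2, 1.5, 0.3]), bag_label=+1))  # 0.0
# Negative bag: even its highest-scoring instance is penalized.
print(mil_hinge_loss(np.array([0.4, -0.1]), bag_label=-1))       # ≈ 1.4
```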

Get higher-level supervision over unlabeled data from SMEs (subject-matter experts); see the labeling-function sketch after this list:

Heuristic rules

Distant Supervision, Constraints, Expected Distribution, Invariances.

External Knowledge
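
A minimal sketch of how such SME supervision sources are often written as labeling functions in the data-programming (Snorkel) style; every name and data point below is hypothetical:

```python
# Votes emitted by each heuristic on an unlabeled example.
POSITIVE, NEGATIVE, ABSTAIN = 1, 0, -1

def lf_keyword(sentence: str) -> int:
    """Heuristic rule: a trigger phrase suggests the relation holds."""
    return POSITIVE if "was born in" in sentence else ABSTAIN

def lf_distant_supervision(pair: tuple, kb_pairs: set) -> int:
    """Distant supervision: label by membership in an external KB."""
    return POSITIVE if pair in kb_pairs else ABSTAIN

kb = {("Obama", "Hawaii")}
sentence = "Obama was born in Hawaii."
votes = [lf_keyword(sentence), lf_distant_supervision(("Obama", "Hawaii"), kb)]
print(votes)  # [1, 1] -> aggregated (e.g. by majority vote) into a noisy label
```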

Label-Free Distant Supervision for Relation Extraction via Knowledge Graph Embedding (EMNLP 2018)

Based on pretrained TransE embeddings: head-entity embedding + sentence embedding (the relation vector predicted by the neural model) should match the tail-entity embedding, i.e. the TransE constraint \(h + r \approx t\).
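
A minimal sketch of that consistency check, assuming pretrained TransE entity embeddings and a sentence encoder that outputs a relation vector (all names here are ours):

```python
import numpy as np

def transe_score(head, relation, tail):
    """TransE constraint h + r ≈ t: a smaller distance means a better fit."""
    return np.linalg.norm(head + relation - tail)

rng = np.random.default_rng(0)
h, t = rng.normal(size=50), rng.normal(size=50)  # pretrained entity embeddings
r_pred = (t - h) + 0.01 * rng.normal(size=50)    # stand-in for the encoder's relation vector
print(transe_score(h, r_pred, t))  # small value -> predicted relation fits (h, t)
```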