https://d2l.ai/chapter_recurrent-modern/encoder-decoder.html
- I think the encoder and the decoder don’t have to be the same type of neural network; they only need to agree on the shape of the state passed between them (see the CNN-encoder/GRU-decoder sketch after this list).
- Question answering: I applied this approach to the Conversational Question Answering (CoQA) dataset and got a perplexity of 1.6. Not too bad for a very simple model (a sketch of how such a perplexity is computed follows below).
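
On the first point, here is a minimal sketch of my own (not the book's code), assuming a CNN encoder and a GRU decoder as in image captioning; the class names, layer sizes, and dummy inputs are made up for illustration. The only contract between the two halves is the shape of `enc_state`, which is why the architectures can differ.

```python
import torch
from torch import nn

class ConvEncoder(nn.Module):
    """Toy CNN encoder: image -> fixed-size state vector."""
    def __init__(self, num_hiddens):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_hiddens))

    def forward(self, images):
        # (batch, 3, H, W) -> (batch, num_hiddens)
        return self.net(images)

class GRUDecoder(nn.Module):
    """Toy GRU decoder that consumes the encoder state as its initial hidden state."""
    def __init__(self, vocab_size, embed_size, num_hiddens):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.rnn = nn.GRU(embed_size, num_hiddens, batch_first=True)
        self.dense = nn.Linear(num_hiddens, vocab_size)

    def forward(self, tokens, enc_state):
        emb = self.embedding(tokens)                    # (batch, steps, embed)
        out, _ = self.rnn(emb, enc_state.unsqueeze(0))  # h0: (1, batch, hiddens)
        return self.dense(out)                          # (batch, steps, vocab)

encoder = ConvEncoder(num_hiddens=64)
decoder = GRUDecoder(vocab_size=100, embed_size=32, num_hiddens=64)
logits = decoder(torch.zeros(4, 5, dtype=torch.long),
                 encoder(torch.randn(4, 3, 64, 64)))
print(logits.shape)  # torch.Size([4, 5, 100])
```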
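And on the perplexity number: a quick sketch of the usual computation, exp of the mean token-level cross-entropy over the held-out targets. The shapes, vocabulary size, and random tensors here are arbitrary stand-ins, not anything CoQA-specific.

```python
import torch
from torch import nn

logits = torch.randn(2, 7, 5000)          # (batch, steps, vocab) from the decoder
targets = torch.randint(0, 5000, (2, 7))  # gold answer tokens

# Mean cross-entropy per token, then exponentiate to get perplexity.
loss = nn.CrossEntropyLoss(reduction="mean")(
    logits.reshape(-1, 5000), targets.reshape(-1))
perplexity = torch.exp(loss)
print(perplexity.item())
```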
I’ve seen encoder-decoder approaches used in recommender-system applications before.
I’ve seen the encoder initialized with pretrained weights. Can the decoder be pretrained as well? My hunch is that if the decoder is pretrained, then the encoder must also be pretrained; otherwise the representation going into the decoder will be different from the representation it saw during pretraining.
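A rough sketch of that warm-start idea; the modules and checkpoint file names below are placeholders I made up, not a real recipe or real files.

```python
import torch
from torch import nn

# Placeholder encoder/decoder; in practice these would be the task's real modules.
encoder = nn.GRU(32, 64, batch_first=True)
decoder = nn.GRU(64, 64, batch_first=True)

# Hypothetical checkpoints (commented out so the snippet runs without the files):
# encoder.load_state_dict(torch.load("pretrained_encoder.pt"))
# decoder.load_state_dict(torch.load("pretrained_decoder.pt"))

# If only the decoder were warm-started, a freshly initialized encoder would feed
# it states unlike anything it saw during pretraining, which matches the hunch
# above. A common pattern is to load both halves, freeze them at first, and
# fine-tune only newly added connecting layers on the downstream task.
for module in (encoder, decoder):
    for p in module.parameters():
        p.requires_grad = False
```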
Hi, I was wondering if you could decode this. This is as far as I have gotten, so please help:

MT N J Z 0 S J Z 0 S M T MC N U NT MTY1M U4LD MTM L DU MS LDE MS LDQ NT 1L 4LDQ2 ODE1MTM5LDA MC LDA MC LDA M C LDA MT OD 1N 1MC OTM4N U2 ODU LDA MC LDA MC MDA LDA M C LDA MC LDA MC LDA MC LD A NS NTU MC M LDE LD MTY0L DQ OTU MC LDA MC LDE MC LD A MS LDE MC LDA MC LDE MS LDE MC LDE MS LDA MS LDE MS LDE MS LDE MC LDE MS LDA MS LDE MS LE Z XR5LDA MC LD E SW5 W5 H MS LDA

Or would you rather decode the original:

MTgzNixJbmZpbml0eSxJbmZpbml0eSwyMTcsMCwxNjUyNTgsMTY1MjU4LDgxMTMyLDUsMSwxLDEsMSwwLDQzNTk1Ljc4LDQ2ODE1MTM5LDAsMCwwLDAsMCwwLDAsMCwwLDAsMTkzODc1Njg1MCwxOTM4NzU2ODUwLDAsMCwwLDAsMCwxMDAwLDAsMCwwLDAsMCwwLDAsMCwwLDAsMCwwLDAsNSwyNTUsMCwzMywwLDEwLDksMTY0LDQsOTUsMCwwLDAsMCwwLDEsMCwwLDAsMSwxLDEsMCwwLDAsMCwwLDEsMSwxLDEsMCwxLDEsMSwxLDAsMSwxLDEsMSwwLDEsMSwxLDEsMCwxLDEsMSwxLDAsMSwxLDEsMSwwLEluZmluaXR5LDAsMCwwLDEsSW5maW5pdHksMSwwLDAs

for unlimited money in Idle Breakout? Please and thank you.
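That code is just base64 text, so a couple of lines of Python will open it up. This is a generic sketch, not anything specific to the game, and the helper name is mine:

```python
import base64

def decode_save(code: str) -> str:
    # Tolerate missing "=" padding, then decode the base64 text.
    padded = code + "=" * (-len(code) % 4)
    return base64.b64decode(padded).decode("utf-8")

# Sanity check on a fragment that appears in the posted string:
print(decode_save("SW5maW5pdHk="))  # -> Infinity

# Paste the full code from the post into decode_save(); the result looks like a
# comma-separated list of game-state values. base64.b64encode() reverses it.
```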