This repository has been archived by the owner on Oct 31, 2022. It is now read-only.

Why is the label of training like this #58

Open
SchenbergZY opened this issue Aug 15, 2020 · 0 comments

Comments

@SchenbergZY

From the code in train.py, I found this loss function:

        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=context[:, 1:], logits=output['logits'][:, :-1]))

But why does it use the slice [:, 1:] for the labels and [:, :-1] for the logits? Why are the slices not the same?
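For context, this offset looks like the standard shift used for next-token prediction in autoregressive language models: the logit produced at position t is a prediction for the token at position t+1, so the last logit has no target and the first token has no prediction. Here is a minimal sketch of the alignment (a toy NumPy example with random placeholder logits, not the repo's actual model):

    import numpy as np

    # Toy batch: one sequence of 5 token ids (hypothetical values).
    context = np.array([[10, 11, 12, 13, 14]])

    # Pretend model output: one logit vector per input position,
    # over a vocab of 50 (random placeholders, not real logits).
    logits = np.random.default_rng(0).normal(size=(1, 5, 50))

    # The logit at position t is computed from tokens 0..t, so it is a
    # prediction for the NEXT token, context[:, t+1].
    labels = context[:, 1:]    # targets: tokens at positions 1..4
    preds = logits[:, :-1]     # predictions made at positions 0..3

    # Each prediction lines up with the token that actually followed it.
    for t in range(labels.shape[1]):
        print(f"logits at position {t} -> target token {labels[0, t]}")

Both slices end up with length seq_len - 1 and pair off one-to-one, which is why the two slices look different even though they describe the same prediction task.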
