Fixed Several Typos #11178

Open: wants to merge 1 commit into base: master
2 changes: 1 addition & 1 deletion orbit/actions/conditional_action.py
@@ -47,7 +47,7 @@ def __init__(
     """Initializes the instance.

     Args:
-      condition: A callable accepting train or eval outputs and returing a bool.
+      condition: A callable accepting train or eval outputs and returning a bool.
       action: The action (or optionally sequence of actions) to perform when
         `condition` is met.
     """
2 changes: 1 addition & 1 deletion orbit/actions/new_best_metric.py
@@ -40,7 +40,7 @@ class NewBestMetric:
 if it is achieved. These separate methods enable the same `NewBestMetric`
 instance to be reused as a condition multiple times, and can also provide
 additional preemption/failure safety. For example, to avoid updating the best
-metric if a model export fails or is pre-emptied:
+metric if a model export fails or is pre-empted:

 new_best_metric = orbit.actions.NewBestMetric(
     'accuracy', filename='/model/dir/best_metric')
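The test-then-commit split described in that docstring can be sketched in plain Python. This is an illustrative stand-in, not the actual orbit API; the class and method names below are hypothetical:

```python
import json
import os
import tempfile

class BestMetricTracker:
    """Illustrative stand-in for the separate condition/commit pattern."""

    def __init__(self, filename, higher_is_better=True):
        self.filename = filename
        self.higher_is_better = higher_is_better

    def _read(self):
        # Returns the persisted best value, or None if nothing is stored yet.
        if os.path.exists(self.filename) and os.path.getsize(self.filename):
            with open(self.filename) as f:
                return json.load(f)
        return None

    def test(self, value):
        # Checks whether `value` beats the stored best, WITHOUT updating it.
        best = self._read()
        if best is None:
            return True
        return value > best if self.higher_is_better else value < best

    def commit(self, value):
        # Persists `value` as the new best.
        with open(self.filename, 'w') as f:
            json.dump(value, f)

path = os.path.join(tempfile.mkdtemp(), 'best_metric')
tracker = BestMetricTracker(path)
if tracker.test(0.9):
    # export_model(...)  # hypothetical export step that might fail or be preempted
    tracker.commit(0.9)  # only update the best metric after the export succeeds
```

Because `test` never writes, a failed or preempted export between `test` and `commit` leaves the stored best metric untouched, which is the safety property the docstring describes.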
2 changes: 1 addition & 1 deletion orbit/actions/new_best_metric_test.py
@@ -61,7 +61,7 @@ def test_json_persisted_value(self):
   tempfile = self.create_tempfile().full_path
   value = {'a': 1, 'b': 2}
   persisted_value = actions.JSONPersistedValue(value, tempfile)
-  # The inital value is used since tempfile is empty.
+  # The initial value is used since tempfile is empty.
   self.assertEqual(persisted_value.read(), value)
   persisted_value = actions.JSONPersistedValue('ignored', tempfile)
   # Initial value of 'ignored' is ignored, since there's a value in tempfile.
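The semantics exercised by that test (initial value used only when the backing file is empty; otherwise the stored value wins) can be sketched with a minimal stand-in class. This is not the real `JSONPersistedValue`; it only mimics the behavior the test asserts:

```python
import json
import os
import tempfile

class PersistedValue:
    """Illustrative stand-in: a value backed by a JSON file."""

    def __init__(self, initial_value, filename):
        self.filename = filename
        if os.path.exists(filename) and os.path.getsize(filename) > 0:
            # A previously stored value takes precedence over `initial_value`.
            with open(filename) as f:
                self._value = json.load(f)
        else:
            # File is empty or missing: fall back to the initial value.
            self._value = initial_value
            self.write()

    def read(self):
        return self._value

    def write(self):
        with open(self.filename, 'w') as f:
            json.dump(self._value, f)

path = os.path.join(tempfile.mkdtemp(), 'value.json')
v1 = PersistedValue({'a': 1, 'b': 2}, path)  # file empty: initial value is used
v2 = PersistedValue('ignored', path)         # file populated: 'ignored' is ignored
print(v2.read())  # → {'a': 1, 'b': 2}
```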
2 changes: 1 addition & 1 deletion research/adversarial_text/graphs.py
@@ -45,7 +45,7 @@
 flags.DEFINE_integer('batch_size', 64, 'Size of the batch.')
 flags.DEFINE_integer('num_timesteps', 100, 'Number of timesteps for BPTT')

-# Model architechture
+# Model architecture
 flags.DEFINE_bool('bidir_lstm', False, 'Whether to build a bidirectional LSTM.')
 flags.DEFINE_bool('single_label', True, 'Whether the sequence has a single '
                   'label, for optimization.')
2 changes: 1 addition & 1 deletion research/attention_ocr/python/model.py
@@ -579,7 +579,7 @@ def label_smoothing_regularization(self, chars_labels, weight=0.1):
 Uses the same method as in https://arxiv.org/abs/1512.00567.

 Args:
-  chars_labels: ground truth ids of charactes, shape=[batch_size,
+  chars_labels: ground truth ids of characters, shape=[batch_size,
     seq_length];
   weight: label-smoothing regularization weight.
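The label-smoothing method cited there (https://arxiv.org/abs/1512.00567) mixes each one-hot label with a uniform distribution: `new = (1 - weight) * one_hot + weight / num_classes`. A dependency-free sketch of that formula, with illustrative sizes:

```python
def smooth_labels(labels, num_classes, weight=0.1):
    """Label-smoothing regularization as in https://arxiv.org/abs/1512.00567.

    Each one-hot ground-truth row is mixed with the uniform distribution:
    new = (1 - weight) * one_hot + weight / num_classes.
    """
    smoothed = []
    for label in labels:
        # Every class gets the uniform share weight / num_classes...
        row = [weight / num_classes] * num_classes
        # ...and the true class additionally gets the (1 - weight) mass.
        row[label] += 1.0 - weight
        smoothed.append(row)
    return smoothed

# One character id out of a hypothetical 4-symbol alphabet:
# the true class ends up with 0.925 and the others with 0.025 each.
print(smooth_labels([2], num_classes=4, weight=0.1))
```

Each row still sums to 1, so the result remains a valid probability distribution for the cross-entropy loss.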
2 changes: 1 addition & 1 deletion research/attention_ocr/python/model_export.py
@@ -80,7 +80,7 @@ def export_model(export_dir,
                  crop_image_height=None):
 """Exports a model to the named directory.

-Note that --datatset_name and --checkpoint are required and parsed by the
+Note that --dataset_name and --checkpoint are required and parsed by the
 underlying module common_flags.

 Args:
2 changes: 1 addition & 1 deletion research/attention_ocr/python/model_test.py
@@ -180,7 +180,7 @@ def test_sequence_loss_function_without_label_smoothing(self):
   self.assertEqual(loss_np.shape, tuple())

 def encode_coordinates_alt(self, net):
-  """An alternative implemenation for the encoding coordinates.
+  """An alternative implementation for the encoding coordinates.

   Args:
     net: a tensor of shape=[batch_size, height, width, num_features]
2 changes: 1 addition & 1 deletion research/audioset/vggish/README.md
@@ -23,7 +23,7 @@ VGGish depends on the following Python packages:

 These are all easily installable via, e.g., `pip install numpy` (as in the
 sample installation session below). Any reasonably recent version of these
-packages shold work.
+packages should work.

 VGGish also requires downloading two data files:
2 changes: 1 addition & 1 deletion research/audioset/vggish/vggish_slim.py
@@ -111,7 +111,7 @@ def load_vggish_slim_checkpoint(session, checkpoint_path):

 This function can be used as an initialization function (referred to as
 init_fn in TensorFlow documentation) which is called in a Session after
-initializating all variables. When used as an init_fn, this will load
+initializing all variables. When used as an init_fn, this will load
 a pre-trained checkpoint that is compatible with the VGGish model
 definition. Only variables defined by VGGish will be loaded.
2 changes: 1 addition & 1 deletion research/audioset/vggish/vggish_smoke_test.py
@@ -17,7 +17,7 @@

 This is a simple smoke test of a local install of VGGish and its associated
 downloaded files. We create a synthetic sound, extract log mel spectrogram
-features, run them through VGGish, post-process the embedding ouputs, and
+features, run them through VGGish, post-process the embedding outputs, and
 check some simple statistics of the results, allowing for variations that
 might occur due to platform/version differences in the libraries we use.
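The smoke-test recipe that docstring describes (synthesize a sound, then check loose statistics with slack for platform differences) can be sketched without any of the real VGGish dependencies. Everything below is an illustrative simplification, not the actual test:

```python
import math

def synthetic_sine(freq_hz=440.0, sr=16000, seconds=1.0):
    """Generates a unit-amplitude sine wave as a list of float samples."""
    n = int(sr * seconds)
    return [math.sin(2 * math.pi * freq_hz * t / sr) for t in range(n)]

def check_stats(samples, tol=0.05):
    """Asserts loose statistics, allowing slack as the real smoke test does."""
    mean = sum(samples) / len(samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    # A pure sine has mean ~0 and RMS ~1/sqrt(2) ~= 0.707.
    assert abs(mean) < tol, f'unexpected mean {mean}'
    assert abs(rms - 1 / math.sqrt(2)) < tol, f'unexpected rms {rms}'

samples = synthetic_sine()
check_stats(samples)
```

The point of the tolerance is the same as in the real test: exact values vary slightly across library versions and platforms, so the check targets a range rather than a fixed output.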
6 changes: 3 additions & 3 deletions tensorflow_models/tensorflow_models_pypi.ipynb
@@ -49,7 +49,7 @@
 "\n",
 "## Colab environment setup. To use a stable TF release version\n",
 "## because of the possible breakage in tf-nightly.\n",
-"# !pip3 install -U numpy\u003e=1.20\n",
+"# !pip3 install -U numpy>=1.20\n",
 "# !pip3 install -q tensorflow==2.8.0"
@@ -101,7 +101,7 @@
 "## Check out modules\n",
 "\n",
 "**Note: As the TensorFlow Models (NLP + Vision) 2.9 release which is tested for this notebook, we partially exported selected modules but the APIs are not stable. Also be aware that, the\n",
-"modeling libraries are advancing very fast, so we generally don't guarantee compatability between versions.** "
+"modeling libraries are advancing very fast, so we generally don't guarantee compatibility between versions.** "
@@ -245,7 +245,7 @@
 "output_type": "stream",
 "text": [
 "spine_net\n",
-"{'4': \u003cKerasTensor: shape=(1, 8, 8, 128) dtype=float32 (created by layer 'spine_net')\u003e, '5': \u003cKerasTensor: shape=(1, 4, 4, 128) dtype=float32 (created by layer 'spine_net')\u003e, '6': \u003cKerasTensor: shape=(1, 2, 2, 128) dtype=float32 (created by layer 'spine_net')\u003e}\n"
+"{'4': <KerasTensor: shape=(1, 8, 8, 128) dtype=float32 (created by layer 'spine_net')>, '5': <KerasTensor: shape=(1, 4, 4, 128) dtype=float32 (created by layer 'spine_net')>, '6': <KerasTensor: shape=(1, 2, 2, 128) dtype=float32 (created by layer 'spine_net')>}\n"