<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Tensorflow guide Archives - Nerd Corner</title>
	<atom:link href="https://nerd-corner.com/tag/tensorflow-guide/feed/" rel="self" type="application/rss+xml" />
	<link>https://nerd-corner.com/tag/tensorflow-guide/</link>
	<description>Craft your dreams!</description>
	<lastBuildDate>Thu, 01 Jun 2023 15:45:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>

<image>
	<url>https://nerd-corner.com/wp-content/uploads/2019/10/cropped-LogoNerdCorner-2-32x32.png</url>
	<title>Tensorflow guide Archives - Nerd Corner</title>
	<link>https://nerd-corner.com/tag/tensorflow-guide/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>NLP Application: Tensorflow.js vs Tensorflow Python &#8211; Part 1</title>
		<link>https://nerd-corner.com/nlp-application-tensorflow-js-vs-tensorflow-python-part-1/</link>
					<comments>https://nerd-corner.com/nlp-application-tensorflow-js-vs-tensorflow-python-part-1/#respond</comments>
		
		<dc:creator><![CDATA[Nerds]]></dc:creator>
		<pubDate>Wed, 31 May 2023 19:17:55 +0000</pubDate>
				<category><![CDATA[Software]]></category>
		<category><![CDATA[CSV dataset]]></category>
		<category><![CDATA[Decoder]]></category>
		<category><![CDATA[Encoder]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[hidden state]]></category>
		<category><![CDATA[JSON]]></category>
		<category><![CDATA[Keras]]></category>
		<category><![CDATA[Long short term memory]]></category>
		<category><![CDATA[loss function]]></category>
		<category><![CDATA[LSTM]]></category>
		<category><![CDATA[Map]]></category>
		<category><![CDATA[Natural Language Processing]]></category>
		<category><![CDATA[natural library]]></category>
		<category><![CDATA[NLP]]></category>
		<category><![CDATA[nlp application]]></category>
		<category><![CDATA[OOV]]></category>
		<category><![CDATA[OOV Token]]></category>
		<category><![CDATA[optimizer]]></category>
		<category><![CDATA[Pad sequences]]></category>
		<category><![CDATA[Padding]]></category>
		<category><![CDATA[read in CSV]]></category>
		<category><![CDATA[rmsprop]]></category>
		<category><![CDATA[Tensorflow]]></category>
		<category><![CDATA[TensorFlow GPU]]></category>
		<category><![CDATA[Tensorflow guide]]></category>
		<category><![CDATA[Tensorflow python]]></category>
		<category><![CDATA[Tensorflow.js]]></category>
		<category><![CDATA[Tensorflow.js vs Tensorflow]]></category>
		<category><![CDATA[Tfjs]]></category>
		<category><![CDATA[tokenization]]></category>
		<category><![CDATA[txt dataset]]></category>
		<category><![CDATA[WordTokenizer]]></category>
		<guid isPermaLink="false">https://nerd-corner.com/de/?p=1440</guid>

					<description><![CDATA[<p>I am currently working on a project in which I want to program a German to Bavarian translator using machine learning. This is called Natural &#8230; </p>
<p>The post <a href="https://nerd-corner.com/nlp-application-tensorflow-js-vs-tensorflow-python-part-1/">NLP Application: Tensorflow.js vs Tensorflow Python &#8211; Part 1</a> appeared first on <a href="https://nerd-corner.com">Nerd Corner</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>I am currently working on a project in which I want to program a German to Bavarian translator using machine learning. This is called Natural Language Processing (NLP). A Google library called Tensorflow is often used for the implementation. There is Tensorflow.js as well as Tensorflow (Python). Since I develop professionally with Angular and am therefore familiar with TypeScript and JavaScript, I initially decided to implement the NLP application in Tensorflow.js. I was naive enough to assume that the only difference between the two libraries would be the programming language used. This is definitely not the case! For my NLP project, some basic functions are missing in Tensorflow.js (such as a tokenizer).<a href="https://nerd-corner.com/tensorflow-js-vs-tensorflow-python/"> In this post I explained the general differences between Tensorflow.js and Tensorflow (Python).</a></p>
<p>I spent many evenings trying to get my project to work with Tensorflow.js and failed in the end. Switching to Python brought the breakthrough I was hoping for! I would recommend that everyone use Python for NLP applications! Nevertheless, in this article I want to explain the differences between Tensorflow.js and Tensorflow in relation to my project using code examples. Along the way, I will also work my newly gained knowledge into the respective sections as best I can.</p>
<p><em><strong>You might also be interested in:</strong> <a href="https://nerd-corner.com/nlp-application-tensorflow-js-vs-tensorflow-python-part-2/">NLP application part 2 (OOV token, padding, creating the model and training the model).</a></em></p>
<h2>Reading in data</h2>
<p>First of all, you need a data set with which the model will be trained later. Here I can recommend <a href="https://www.kaggle.com/" target="_blank" rel="noopener">https://www.kaggle.com/</a>. There you will find a large number of data sets for free use and even some code examples. You can either read in the data set via a link or download it and then read it in locally from the file system. A good data set should contain over 100,000 examples, preferably including whole paragraphs. For example, this is what an English/French data set looks like as a CSV:</p>
<p><img fetchpriority="high" decoding="async" class="aligncenter wp-image-1389 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/sample-dataset-tensorflow.png" alt="Sample dataset tensorflow CSV" width="1380" height="332" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/sample-dataset-tensorflow.png 1393w, https://nerd-corner.com/wp-content/uploads/2023/05/sample-dataset-tensorflow-300x72.png 300w, https://nerd-corner.com/wp-content/uploads/2023/05/sample-dataset-tensorflow-1024x246.png 1024w, https://nerd-corner.com/wp-content/uploads/2023/05/sample-dataset-tensorflow-768x185.png 768w" sizes="(max-width: 1380px) 100vw, 1380px" /></p>
<p>First, the simple variant using Python:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-title="Read in CSV" data-enlighter-theme="atomic">import pandas as pd

# read in dataSet for training
data = pd.read_csv("./dataset/eng_-french.csv")
data.columns = ["english", "french"]
print(data.head())
print(data.info())</pre>
<p>We use the pandas library to read in the CSV. With head() we can check that the import worked and display the first five rows. With info() we get further information such as the number of columns and rows:</p>
<p><img decoding="async" class="aligncenter wp-image-1388 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/output-read-in-CSV-python.png" alt="output CSV read in with python pandas lib" width="575" height="516" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/output-read-in-CSV-python.png 583w, https://nerd-corner.com/wp-content/uploads/2023/05/output-read-in-CSV-python-300x269.png 300w" sizes="(max-width: 575px) 100vw, 575px" /></p>
<p>For comparison in Tensorflow.js (Tfjs) there is also a possibility to read in CSV:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="js" data-enlighter-title="index.js" data-enlighter-theme="atomic">const tf = require("@tensorflow/tfjs");

async function readInData() {
  await tf.ready();
  const languageDataSet = tf.data.csv("file://" + "./ger_en_trans.csv");

  // Extract language pairs
  const dataset = languageDataSet.map((record) =&gt; ({
    en: record.en,
    de: record.de,
  }));

  const pairs = await dataset.toArray();

  console.log(pairs);
}

readInData();</pre>
<p>I tried at first to read in the same data set as in the Python version:</p>
<p><img decoding="async" class="aligncenter wp-image-1408 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/Tensorflow.js-ReadIn-Same-CSV.png" alt="read in same CSV as in tf" width="745" height="220" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/Tensorflow.js-ReadIn-Same-CSV.png 758w, https://nerd-corner.com/wp-content/uploads/2023/05/Tensorflow.js-ReadIn-Same-CSV-300x89.png 300w" sizes="(max-width: 745px) 100vw, 745px" /></p>
<p>Afterwards I wanted to shorten the headings in the original CSV, but this strangely gave me an error message when reading in. Even when I restored the CSV to its original state, the error remained:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1407 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/read-in-modified-CSV-before-it-breaks-tfjs.png" alt="error when reading in CSV" width="610" height="298" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/read-in-modified-CSV-before-it-breaks-tfjs.png 620w, https://nerd-corner.com/wp-content/uploads/2023/05/read-in-modified-CSV-before-it-breaks-tfjs-300x147.png 300w" sizes="auto, (max-width: 610px) 100vw, 610px" /></p>
<p>In the end, I decided to use a different data set:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1406 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/CSV-sample-screenshot-en-de.png" alt="other CSV data samples" width="375" height="356" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/CSV-sample-screenshot-en-de.png 389w, https://nerd-corner.com/wp-content/uploads/2023/05/CSV-sample-screenshot-en-de-300x285.png 300w" sizes="auto, (max-width: 375px) 100vw, 375px" /></p>
<p>This one was also much more readable when it was read in:</p>
<p><img loading="lazy" decoding="async" class="zoooom aligncenter wp-image-1405" src="https://nerd-corner.com/wp-content/uploads/2023/05/reading-in-the-german-english-CSV.png" alt="NLP Anwendung Tensorflow.js" width="825" height="276" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/reading-in-the-german-english-CSV.png 836w, https://nerd-corner.com/wp-content/uploads/2023/05/reading-in-the-german-english-CSV-300x100.png 300w, https://nerd-corner.com/wp-content/uploads/2023/05/reading-in-the-german-english-CSV-768x257.png 768w" sizes="auto, (max-width: 825px) 100vw, 825px" /></p>
<p>And here is the final result after the mapping:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1404 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/result-data-read-in-tfjs.png" alt="after mapping csv" width="515" height="99" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/result-data-read-in-tfjs.png 528w, https://nerd-corner.com/wp-content/uploads/2023/05/result-data-read-in-tfjs-300x58.png 300w" sizes="auto, (max-width: 515px) 100vw, 515px" /></p>
<p>Although Tfjs offers an extra function to read in the CSV, I still had more trouble than in the Python version. I have also not found a quick way to read in a data set in txt format. However, txt files are widespread!</p>
<h2>Prepare data</h2>
<p>I have often seen a cleaning function being written for data preparation, with the target sentences also receiving a start and end token. I then wondered whether the input sentences, i.e. the encoder side, also need start and end tokens. In the context of sequence-to-sequence models, however, the encoder does not need explicit start and end tokens. Its purpose is to process the input sequence as it is and produce a representation of the input.</p>
<p>The decoder, on the other hand, which generates the output sequence, usually benefits from the use of start and end tokens. These tokens help to mark the beginning and end of the generated sequence. The use of start and end tokens is therefore specific to the decoder. During training, the input sequence of the decoder includes a start token at the beginning and excludes an end token at the end. The output sequence of the decoder contains the end token and excludes the start token. In this way, the model learns to generate the correct output sequence based on the input.</p>
<p>When creating translations with the trained model, you start with the start token and generate one token after another until you hit the end token or reach a maximum sequence length. Adding start and end tokens to the decoder sentences improves the performance of the NLP translator model. It helps to establish clear sequence boundaries and supports the generation process by indicating where the translation starts and ends.</p>
<h4>In summary:</h4>
<ul>
<li>Encoder: No need for start and end tokens. Processes the input sequence as it is.</li>
<li>Decoder: Start and end tokens are helpful for generating the output sequence.</li>
</ul>
<p>We start again with the easy part, namely Python. We want to clean up the data we read in. This means converting everything to lower case and removing characters that are not part of the alphabet or punctuation marks. For this we need the regex library (re).</p>
<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="atomic">import re

def clean(text):
    text = text.lower()  # lower case
    # remove any characters not a-z and ?!,'
    # please note that french has additional characters...I just simplified that
    text = re.sub(u"[^a-z!?',]", " ", text)
    return text


# apply cleaningFunctions to dataframe
data["english"] = data["english"].apply(lambda txt: clean(txt))
data["french"] = data["french"].apply(lambda txt: clean(txt))

# add &lt;start&gt; &lt;end&gt; token to decoder sentence (french)
data["french"] = data["french"].apply(lambda txt: f"&lt;start&gt; {txt} &lt;end&gt;")

print(data.sample(10))</pre>
<p>I have simplified here. Since this is a French data set, one should actually write an extra cleaning function that also takes French letters like &#8220;ê&#8221; into account. The sample() function only serves to illustrate the data:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1391 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/random-sample-of-cleaned-data.png" alt="tensorflow random sample of cleaned data" width="630" height="385" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/random-sample-of-cleaned-data.png 639w, https://nerd-corner.com/wp-content/uploads/2023/05/random-sample-of-cleaned-data-300x184.png 300w" sizes="auto, (max-width: 630px) 100vw, 630px" /></p>
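<p>As mentioned, the simplified regex above would strip accented French letters. A possible extension of the clean() function could look like the following sketch (the accent set is illustrative, not exhaustive, and the function name is hypothetical):</p>

```python
import re

# Hypothetical variant of clean() that keeps common French accented
# letters instead of replacing them with spaces
def clean_fr(text):
    text = text.lower()  # lower case
    # allow a-z, common French accents and the punctuation ?!,'
    text = re.sub(u"[^a-zàâæçéèêëîïôœùûüÿ!?',]", " ", text)
    return text

print(clean_fr("Tu es très drôle!"))  # tu es très drôle!
```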
<p>In Tfjs the process is absolutely identical. I have created a cleanData() function and modified the previous code:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="js" data-enlighter-theme="atomic">function cleanData(text) {
  //if necessary also remove any characters not a-z and ?!,'
  return text.toLowerCase();
}</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="js" data-enlighter-theme="atomic">const dataset = languageDataSet.map((record) =&gt; ({
   en: cleanData(record.en),
   de: "startToken " + cleanData(record.de) + " endToken",
 }));</pre>
<p>The result is therefore also identical to the Python approach:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1410 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/cleaned-up-tfjs-data.png" alt="cleaned up tfjs data" width="630" height="146" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/cleaned-up-tfjs-data.png 639w, https://nerd-corner.com/wp-content/uploads/2023/05/cleaned-up-tfjs-data-300x69.png 300w" sizes="auto, (max-width: 630px) 100vw, 630px" /></p>
<p>If the words &#8220;start&#8221; and &#8220;end&#8221; can occur in regular sentences, they should not double as the special tokens that mark the beginning and end of sequences. When tokenising, it is important to choose special tokens that are unlikely to occur in the actual input data. This ensures that the model can distinguish them from normal words and learns to produce the appropriate output sequences.</p>
<p>If the words &#8220;start&#8221; and &#8220;end&#8221; are regular words in the input sentences, consider using different special tokens to mark the start and end of sequences. A common choice is &#8220;&lt;start&gt;&#8221; and &#8220;&lt;end&gt;&#8221;. Using special tokens that are unlikely to be part of the regular vocabulary ensures that they are correctly identified and processed by the model during training and generation.</p>
<h4>For example, the tokenised sequences would look like this:</h4>
<ul>
<li>Decoder Input: [&#8220;&lt;start&gt;&#8221;, &#8220;hello&#8221;, &#8220;world&#8221;]</li>
<li>Decoder Output: [&#8220;hello&#8221;, &#8220;world&#8221;, &#8220;&lt;end&gt;&#8221;]</li>
</ul>
<h4>Therefore AVOID the following:</h4>
<ul>
<li>Decoder Input: [&#8220;start&#8221;, &#8220;hello&#8221;, &#8220;world&#8221;]</li>
<li>Decoder output: [&#8220;hello&#8221;, &#8220;world&#8221;, &#8220;end&#8221;]</li>
</ul>
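<p>The shifted input/output pairs above can be derived from a single tokenised target sentence. As a minimal sketch of the idea (an illustration, not code from the project):</p>

```python
def decoder_pairs(target_tokens):
    # target sentence wrapped in special tokens: ["<start>", ..., "<end>"]
    decoder_input = target_tokens[:-1]   # keeps <start>, drops <end>
    decoder_output = target_tokens[1:]   # drops <start>, keeps <end>
    return decoder_input, decoder_output

inp, out = decoder_pairs(["<start>", "hello", "world", "<end>"])
print(inp)  # ['<start>', 'hello', 'world']
print(out)  # ['hello', 'world', '<end>']
```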
<h2>Tokenisation</h2>
<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="atomic"># tokenization
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing.text import Tokenizer
import numpy as np

# english tokenizer
english_tokenize = Tokenizer(filters='#$%&amp;()*+,-./:;&lt;=&gt;@[\\]^_`{|}~\t\n')
english_tokenize.fit_on_texts(data["english"])
num_encoder_tokens = len(english_tokenize.word_index)+1
# print(num_encoder_tokens)
encoder = english_tokenize.texts_to_sequences(data["english"])
# print(encoder[:5])
max_encoder_sequence_len = np.max([len(enc) for enc in encoder])
# print(max_encoder_sequence_len)

# french tokenizer
french_tokenize = Tokenizer(filters="#$%&amp;()*+,-./:;&lt;=&gt;@[\\]^_`{|}~\t\n")
french_tokenize.fit_on_texts(data["french"])
num_decoder_tokens = len(french_tokenize.word_index)+1
# print(num_decoder_tokens)
decoder = french_tokenize.texts_to_sequences(data["french"])
# print(decoder[:5])
max_decoder_sequence_len = np.max([len(dec) for dec in decoder])
# print(max_decoder_sequence_len)

</pre>
<p>This code performs tokenisation and sequence preprocessing with the Tokenizer class in TensorFlow.</p>
<ol>
<li><strong>english_tokenize = Tokenizer(filters=&#8217;#$%&amp;()*+,-./:;&lt;=&gt;@[\]^_`{|}~\t\n&#8217;)</strong> Initialises a tokenizer object for English sentences. The <strong>filters</strong> parameter specifies characters to be filtered out during tokenisation. We have already filtered the data in the cleaning process, so it is not really necessary to filter again here.</li>
<li><strong>english_tokenize.fit_on_texts(data[&#8220;english&#8221;])</strong> Updates the internal vocabulary of the tokenizer based on the English sentences in the variable <strong>data</strong>. Each word in the vocabulary is assigned a unique index.</li>
<li><strong>num_encoder_tokens = len(english_tokenize.word_index) + 1</strong> Determines the number of unique tokens (words) in the English vocabulary. The <strong>word_index</strong> attribute of the tokeniser returns a dictionary that maps words to their respective indices.</li>
<li><strong>encoder = english_tokenize.texts_to_sequences(data[&#8220;english&#8221;])</strong> Converts the English sentences in the variable <strong>data</strong> into sequences of token indices using the tokenizer. Each sentence is replaced by a sequence of integers representing the corresponding words.</li>
<li><strong>max_encoder_sequence_len = np.max([len(enc) for enc in encoder])</strong> Calculates the maximum length (number of tokens) among all encoded sequences. It uses the <strong>max</strong> function of NumPy to find the maximum value in a list comprehension.</li>
</ol>
<p>These steps help to prepare the sentences for further processing in an NLP model. This is necessary for both languages!</p>
<p>The sentences have now been tokenised and converted into sequences of token indices, and the maximum sequence length has been determined. An example sentence from the dataset now looks like this: [148, 252, 59, 14, 111]. Here, 148 could stand for &#8220;I&#8221;, 252 for &#8220;am&#8221;, 59 for &#8220;very&#8221;, 14 for &#8220;hungry&#8221; and 111 for &#8220;now&#8221;.</p>
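<p>Conceptually, the tokenizer does two things: fit_on_texts() builds a word-to-index dictionary, and texts_to_sequences() replaces every word with its index. The following stripped-down pure-Python sketch only illustrates the principle; the real Keras Tokenizer additionally orders the indices by word frequency and applies the filter characters:</p>

```python
def fit_on_texts(sentences):
    # build a word-to-index dictionary; index 0 stays reserved (e.g. for <pad>)
    word_index = {}
    for sentence in sentences:
        for word in sentence.lower().split():
            if word not in word_index:
                word_index[word] = len(word_index) + 1
    return word_index

def texts_to_sequences(sentences, word_index):
    # replace every known word with its index
    return [[word_index[w] for w in s.lower().split() if w in word_index]
            for s in sentences]

word_index = fit_on_texts(["I am very hungry now"])
print(texts_to_sequences(["I am very hungry now"], word_index))
# [[1, 2, 3, 4, 5]]
```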
<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="atomic">idx_2_txt_decoder = {k: i for i, k in french_tokenize.word_index.items()}
# print(idx_2_txt_decoder)
idx_2_txt_encoder = {k: i for i, k in english_tokenize.word_index.items()}
# print(idx_2_txt_encoder)

idx_2_txt_decoder[0] = "&lt;pad&gt;"
idx_2_txt_encoder[0] = "&lt;pad&gt;"</pre>
<p>The code snippet <strong>idx_2_txt_encoder = {k: i for i, k in english_tokenize.word_index.items()}</strong> creates a dictionary <strong>idx_2_txt_encoder</strong> that maps token indices to the corresponding words in the English vocabulary. The dict comprehension iterates over the key-value pairs in <strong>english_tokenize.word_index</strong>. Each pair consists of a word and its index; through <strong>for i, k</strong> the word is bound to <strong>i</strong> and the index to <strong>k</strong>. The comprehension therefore builds a new dictionary whose keys are the indices (<strong>k</strong>) and whose values are the words (<strong>i</strong>).</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1395 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/index2textencoder-printed.png" alt="index 2 tokenizer dicitonary sample" width="1730" height="253" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/index2textencoder-printed.png 1742w, https://nerd-corner.com/wp-content/uploads/2023/05/index2textencoder-printed-300x44.png 300w, https://nerd-corner.com/wp-content/uploads/2023/05/index2textencoder-printed-1024x150.png 1024w, https://nerd-corner.com/wp-content/uploads/2023/05/index2textencoder-printed-768x112.png 768w, https://nerd-corner.com/wp-content/uploads/2023/05/index2textencoder-printed-1536x225.png 1536w" sizes="auto, (max-width: 1730px) 100vw, 1730px" /></p>
<p>The resulting <strong>idx_2_txt_encoder</strong> dictionary allows you to look up the word corresponding to a particular index in the English vocabulary. <strong>english_tokenize.word_index</strong>, by the way, maps in exactly the opposite direction: there the key is the word and the value the index. The second line, <strong>idx_2_txt_encoder[0] = &#8220;&lt;pad&gt;&#8221;</strong>, adds a special entry to the dictionary: the word &#8220;&lt;pad&gt;&#8221; is assigned to index 0 to define a padding token that is used when padding sequences.</p>
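<p>The reserved index 0 comes into play as soon as all sequences are brought to the same length. As a sketch of the idea (note that Keras pad_sequences pads at the beginning by default; this illustration pads at the end for readability, and the function name is made up):</p>

```python
def pad_post(sequences, maxlen):
    # append 0 (the <pad> index) until every sequence has length maxlen
    return [seq + [0] * (maxlen - len(seq)) for seq in sequences]

print(pad_post([[148, 252], [59, 14, 111]], maxlen=4))
# [[148, 252, 0, 0], [59, 14, 111, 0]]
```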
<p>Afterwards, the dictionaries should be saved, because later, when the trained model is used, its translations will also be a series of indices that are transformed back into readable sentences with the help of the dictionaries.</p>
<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="atomic">import pickle

# Saving the dictionaries
pickle.dump(idx_2_txt_encoder, open("./saves/idx_2_word_input.txt", "wb"))
pickle.dump(idx_2_txt_decoder, open("./saves/idx_2_word_target.txt", "wb"))</pre>
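<p>To illustrate why the dictionaries are saved: once the trained model emits a sequence of indices, the lookup simply runs through the saved index-to-word dictionary. A sketch with a hypothetical mini-dictionary (the words and indices are made up):</p>

```python
# hypothetical excerpt of idx_2_txt_decoder
idx_2_txt = {0: "<pad>", 1: "<start>", 2: "je", 3: "mange", 4: "<end>"}

def indices_to_sentence(indices, idx_2_txt):
    # look up every index and drop the special tokens
    words = [idx_2_txt[i] for i in indices]
    return " ".join(w for w in words if w not in ("<pad>", "<start>", "<end>"))

print(indices_to_sentence([1, 2, 3, 4, 0], idx_2_txt))  # je mange
```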
<p>The same process as in Python can also be built for the NLP application in Tensorflow.js. Of course, you need a few more lines of code and the overall workload is higher. The first hurdle here is the tokenizer. Unfortunately, unlike Tensorflow (Python), Tfjs does not have its own tokenizer. After extensive research, I luckily found the natural.WordTokenizer. I would like to point out here that a Node.js project is definitely required: Tfjs can be integrated via a &lt;script&gt; tag, but the natural.WordTokenizer cannot!</p>
<p>Another important point is that the WordTokenizer removes &#8220;&lt;&#8221; and &#8220;&gt;&#8221;. An output sentence &#8220;&lt;start&gt; I eat &lt;end&gt;&#8221; therefore simply becomes [&#8216;start&#8217;, &#8216;I&#8217;, &#8216;eat&#8217;, &#8216;end&#8217;]. Thus the &#8220;&lt;start&gt;&#8221; and &#8220;&lt;end&gt;&#8221; tokens are no longer clearly recognisable! I have therefore replaced them in the JS code from the beginning with &#8220;startToken&#8221; and &#8220;endToken&#8221;.</p>
<p>First, we tokenise every single sentence from the dataset again and then create a vocabulary dictionary for each of the two languages. Finally, we replace all the words with the indexes from the vocabulary dictionary:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="js" data-enlighter-theme="atomic">const natural = require("natural");

function tokenize(data) {
  const tokenizer = new natural.WordTokenizer();

  const enData = data.map((row) =&gt; tokenizer.tokenize(row.en));
  const deData = data.map((row) =&gt; tokenizer.tokenize(row.de));

  const enVocabulary = new Map();
  const deVocabulary = new Map();

  // Insert &lt;pad&gt; at index 0
  enVocabulary.set("&lt;pad&gt;", 0);
  deVocabulary.set("&lt;pad&gt;", 0);

  const fillVocabulary = (langData, vocabMap) =&gt; {
    langData.forEach((sentence) =&gt; {
      sentence.forEach((word) =&gt; {
        if (!vocabMap.has(word)) {
          const newIndex = vocabMap.size;
          vocabMap.set(word, newIndex);
        }
      });
    });
  };

  fillVocabulary(enData, enVocabulary);
  fillVocabulary(deData, deVocabulary);

  // Replace words with indexes
  const indexedEnData = enData.map((element) =&gt;
    element.map((word) =&gt; enVocabulary.get(word))
  );
  const indexedDeData = deData.map((element) =&gt;
    element.map((word) =&gt; deVocabulary.get(word))
  );

  return { en: indexedEnData, de: indexedDeData };
}</pre>
<p>In order to convert the results of our model back into words later, and to be able to use the model in the real application, we save the two vocabulary dictionaries. I have swapped the key and value pairs, but this is not strictly necessary:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="js" data-enlighter-theme="atomic">const fs = require("fs");

// store the input and output key value pairs
  fs.writeFileSync(
    "vocabulary/inputVocableSet.json",
    JSON.stringify(switchKeysAndValues(Object.fromEntries(enVocabulary)))
  );
  fs.writeFileSync(
    "vocabulary/outputVocableSet.json",
    JSON.stringify(switchKeysAndValues(Object.fromEntries(deVocabulary)))
  );

function switchKeysAndValues(obj) {
  const switchedObj = {};
  for (const key in obj) {
    if (obj.hasOwnProperty(key)) {
      const value = obj[key];
      switchedObj[value] = key;
    }
  }
  return switchedObj;
}</pre>
<p>As a result we get a JSON object with our vocabulary:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1411 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/06/excerpt-of-the-stored-json-map.png" alt="excerpt of the stored json map" width="240" height="553" srcset="https://nerd-corner.com/wp-content/uploads/2023/06/excerpt-of-the-stored-json-map.png 249w, https://nerd-corner.com/wp-content/uploads/2023/06/excerpt-of-the-stored-json-map-130x300.png 130w" sizes="auto, (max-width: 240px) 100vw, 240px" /></p>
<p>We then return the result of our function:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1412 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/06/result-after-tokenization.png" alt="result after tokenization" width="890" height="679" srcset="https://nerd-corner.com/wp-content/uploads/2023/06/result-after-tokenization.png 895w, https://nerd-corner.com/wp-content/uploads/2023/06/result-after-tokenization-300x229.png 300w, https://nerd-corner.com/wp-content/uploads/2023/06/result-after-tokenization-768x586.png 768w" sizes="auto, (max-width: 890px) 100vw, 890px" /></p>
<h2>Files for download</h2>
<ul>
<li><a  data-e-Disable-Page-Transition="true" class="download-link" title="" href="https://nerd-corner.com/download/1432/?tmstv=1756411863" rel="nofollow" id="download-link-1432" data-redirect="false" >
	NLP Tensorflow.js code (model has an error!)</a>
</li>
</ul>
<p>The post <a href="https://nerd-corner.com/nlp-application-tensorflow-js-vs-tensorflow-python-part-1/">NLP Application: Tensorflow.js vs Tensorflow Python &#8211; Part 1</a> appeared first on <a href="https://nerd-corner.com">Nerd Corner</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nerd-corner.com/nlp-application-tensorflow-js-vs-tensorflow-python-part-1/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Tensorflow.js vs Tensorflow (Python)</title>
		<link>https://nerd-corner.com/tensorflow-js-vs-tensorflow-python/</link>
					<comments>https://nerd-corner.com/tensorflow-js-vs-tensorflow-python/#respond</comments>
		
		<dc:creator><![CDATA[Nerds]]></dc:creator>
		<pubDate>Thu, 18 May 2023 14:11:37 +0000</pubDate>
				<category><![CDATA[Software]]></category>
		<category><![CDATA[CUDA]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Natural Language Processing]]></category>
		<category><![CDATA[NLP]]></category>
		<category><![CDATA[programming]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[software comparison]]></category>
		<category><![CDATA[Tensorflow]]></category>
		<category><![CDATA[Tensorflow Anleitung]]></category>
		<category><![CDATA[tensorflow comparison]]></category>
		<category><![CDATA[TensorFlow GPU]]></category>
		<category><![CDATA[Tensorflow guide]]></category>
		<category><![CDATA[Tensorflow python]]></category>
		<category><![CDATA[Tensorflow.js]]></category>
		<category><![CDATA[Tensorflow.js vs Tensorflow]]></category>
		<category><![CDATA[TPU]]></category>
		<guid isPermaLink="false">https://nerd-corner.com/de/?p=1384</guid>

					<description><![CDATA[<p>In the world of machine learning and deep learning, there are a variety of tools and libraries that help developers create and train advanced models. &#8230; </p>
<p>The post <a href="https://nerd-corner.com/tensorflow-js-vs-tensorflow-python/">Tensorflow.js vs Tensorflow (Python)</a> appeared first on <a href="https://nerd-corner.com">Nerd Corner</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In the world of machine learning and deep learning, there are a variety of tools and libraries that help developers create and train advanced models. Two popular options are Tensorflow.js and the original Tensorflow, developed for JavaScript and Python respectively. Due to a project where I want to use NLP (Natural Language Processing) to automatically convert standard language texts into dialect, I took a closer look at the two options. In this blog article I will discuss the differences and advantages of Tensorflow.js vs Tensorflow (Python). Afterwards, everyone should be able to decide between Tensorflow.js and Tensorflow (Python).</p>
<p><em><strong>This might also be interesting for you:</strong> <a href="https://nerd-corner.com/enable-tensorflow-gpu-under-windows/">How to enable GPU for TensorFlow in Windows</a></em></p>
<h2>Language and environment</h2>
<p>Tensorflow.js was developed specifically for JavaScript, which means that models can be implemented directly in JavaScript code. This allows seamless integration into web applications and execution of models in the browser without additional server infrastructure.</p>
<p>Tensorflow Python provides a comprehensive Python library for machine learning. Python is a widely used programming language in the field of machine learning and offers a variety of libraries and frameworks to facilitate model development and deployment.</p>
<h2>Target platform</h2>
<p>One of the greatest strengths of Tensorflow.js is the ability to run models directly in the browser. This allows developers to create interactive web applications with machine learning without the user having to install additional software or send requests to an external server.</p>
<p>Tensorflow Python, on the other hand, allows models to be developed and run on multiple platforms, including desktop computers, servers and mobile devices. It offers a wide range of features and supports advanced techniques such as training models on GPUs or TPUs. TPU stands for &#8220;Tensor Processing Unit&#8221;: application-specific chips designed to accelerate machine learning workloads.</p>
<h2>Community</h2>
<p>To be honest, the Tensorflow.js community is simply too small. For many questions, you get the feeling that you are doing &#8220;pioneering work&#8221;, which is not recommended, especially for beginners. To be more specific: there are 80 times more StackOverflow questions about TensorFlow than about TensorFlow.js. Depending on the use case, however, a number of resources, tutorials and examples now exist for TensorFlow.js that can help with development. And since JavaScript is a widely used language, you can also draw on a large amount of general web development resources when working with Tensorflow.js.</p>
<p>Tensorflow Python, on the other hand, benefits from a large and active community that regularly develops and publishes new models, techniques and resources. There are a variety of tutorials, forums and open source projects that can help you advance your machine learning projects. The Python community is known for its support and collaboration, which can make it easier to get started with Tensorflow Python.</p>
<h2>Libraries</h2>
<p>In many cases, and specifically for my NLP project, the knockout criterion for Tensorflow.js was its libraries: Tensorflow.js has far fewer libraries than Tensorflow Python. In my view, this is due to the following three points:</p>
<ol>
<li>Development status: Tensorflow Python has existed for several years and has had a long development period. During this time, numerous extensions, modules and additional libraries have been developed specifically for Tensorflow Python. Tensorflow.js, on the other hand, is a comparatively newer technology and may still be in an earlier stage of development. Therefore, Tensorflow.js may have fewer libraries and extensions developed specifically for this platform.</li>
<li>Target platform: Tensorflow Python targets a wide range of platforms, including desktop computers, servers and mobile devices. As a result, there are a variety of specialised libraries and extensions for different use cases and hardware configurations. Tensorflow.js, on the other hand, aims to run models directly in the browser. Therefore, the functions and libraries of Tensorflow.js are optimised for the requirements of web applications and the limited resource availability in the browser.</li>
<li>Compatibility: Tensorflow.js is based on JavaScript, a language primarily used for web development. Although JavaScript has a large developer community and many existing libraries and frameworks, not all of them are directly compatible with Tensorflow.js. Therefore, not all available libraries and extensions for Tensorflow Python may also be available for Tensorflow.js.</li>
</ol>
<p>However, it is important to note that Tensorflow.js is constantly evolving and the library and available extensions may grow over time. The community around Tensorflow.js is working to expand the ecosystem and provide new libraries as well as tools to improve the functionality and expand the possibilities of Tensorflow.js.</p>
<h2>Platform support</h2>
<p>Written in C++, TensorFlow is a cross-platform library supported on various operating systems including Windows, macOS and Linux. It can be used to develop and run models on servers as well as desktop computers.</p>
<p>TensorFlow therefore supports a wider range of platforms than TensorFlow.js, as TensorFlow.js is mainly focused on JavaScript environments such as the browser and Node.js.</p>
<h2>Training large models</h2>
<p>TensorFlow can train and process very large models, whereas TensorFlow.js is limited to smaller models due to the performance limitations of JavaScript engines.</p>
<p>JavaScript engines are less powerful compared to specialised machine learning frameworks and hardware accelerators such as GPUs or TPUs. This means that TensorFlow.js is usually suitable for smaller models due to the limited processing power and memory of JavaScript engines.</p>
<p>Therefore, TensorFlow is recommended for projects where large models need to be trained or complex calculations need to be performed, while TensorFlow.js is better suited for applications that use smaller models and are intended to run in web browsers or JavaScript environments.</p>
<h2>Why TensorFlow.js in the browser?</h2>
<p>First, there is the issue of speed: since you don&#8217;t have to send data to a remote server, classification is faster. In addition, you have direct access to sensors such as the camera, microphone, GPS, etc.</p>
<p>In addition, data protection is an important issue in many countries. You can train and classify data on your own computer without having to send it to an external web server. This may be necessary to comply with data protection laws such as the GDPR, or if you do not want to share the data with third parties.</p>
<p>With one click, anyone in the world can access and use the application via a link without the need for a complex setup with servers and special hardware such as graphics cards.</p>
<p>Finally, with ML you should also keep an eye on the costs: you only have to pay for hosting the client, which is much cheaper than permanently maintaining your own server.</p>
<h2>Conclusion: Tensorflow.js vs. Tensorflow?</h2>
<p>Overall, the question Tensorflow.js vs. Tensorflow can be answered in a simplified way by saying that Tensorflow with Python is almost ALWAYS the better choice due to its broad acceptance and large <a href="https://www.tensorflow.org/community" target="_blank" rel="noopener">community</a>. Nevertheless, Tensorflow.js is becoming increasingly important and is predominantly used by developers who want to develop web applications with machine learning on the client side.</p>
<p>This means if you want to build apps that run in the web browser, TensorFlow.js is actually the better choice. However, if you want to build apps to run on a server or desktop computer, TensorFlow is the better option. Also, the Python version is much better suited if you want to work with powerful hardware such as GPUs.</p>
<p>The post <a href="https://nerd-corner.com/tensorflow-js-vs-tensorflow-python/">Tensorflow.js vs Tensorflow (Python)</a> appeared first on <a href="https://nerd-corner.com">Nerd Corner</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nerd-corner.com/tensorflow-js-vs-tensorflow-python/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Enable TensorFlow GPU under Windows</title>
		<link>https://nerd-corner.com/enable-tensorflow-gpu-under-windows/</link>
					<comments>https://nerd-corner.com/enable-tensorflow-gpu-under-windows/#comments</comments>
		
		<dc:creator><![CDATA[Nerds]]></dc:creator>
		<pubDate>Tue, 09 May 2023 21:40:43 +0000</pubDate>
				<category><![CDATA[Software]]></category>
		<category><![CDATA[Visual Studio]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[graphics card]]></category>
		<category><![CDATA[guide]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[ML]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Step by step guide]]></category>
		<category><![CDATA[Tensorflow]]></category>
		<category><![CDATA[TensorFlow GPU]]></category>
		<category><![CDATA[Tensorflow guide]]></category>
		<category><![CDATA[user guide]]></category>
		<category><![CDATA[Windows]]></category>
		<guid isPermaLink="false">https://nerd-corner.com/de/?p=1376</guid>

					<description><![CDATA[<p>I am currently working on a hobby project where I want to translate German sentences into Bavarian using AI, for example on the website Dialektl.com. &#8230; </p>
<p>The post <a href="https://nerd-corner.com/enable-tensorflow-gpu-under-windows/">Enable TensorFlow GPU under Windows</a> appeared first on <a href="https://nerd-corner.com">Nerd Corner</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>I am currently working on a hobby project where I want to translate German sentences into Bavarian using AI, for example on the website <a href="http://dialektl.com" target="_blank" rel="noopener">Dialektl.com</a>. I am working with Tensorflow, an open source platform for machine learning and deep learning. Tensorflow is one of the most widely used libraries for Deep Learning because it offers a wide range of features and a very active developer community. For almost all machine learning models, the training process is extremely computationally expensive. To increase the speed of the training process, it is recommended to use the computer&#8217;s graphics card (GPU) instead of the processor (CPU).</p>
<p>My naive thought was that Tensorflow would automatically use the GPU. However, you first have to follow the step-by-step instructions below in order for Tensorflow to recognise and use the GPU at all.</p>
<p><em><strong>You might also be interested in:</strong> <a href="https://nerd-corner.com/tensorflow-js-vs-tensorflow-python/">Should I use Tensorflow.js or Tensorflow (Python)?</a></em></p>
<p><strong>DISCLAIMER: As of version 2.11, Tensorflow no longer supports GPUs under Windows! Either change the operating system or downgrade Tensorflow to version 2.10. In addition, a graphics card from NVIDIA is required!</strong></p>
<h2>1. Checking the desired CUDA and cuDNN versions</h2>
<p>First, you should find out which CUDA and cuDNN versions you need for your TensorFlow version. The following page lists all TensorFlow versions together with the required CUDA and cuDNN versions:<a href="https://www.tensorflow.org/install/source_windows#gpu"> https://www.tensorflow.org/install/source_windows#gpu</a></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1372 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/tensorflow_requirements.png" alt="tensorflow requirements" width="1120" height="562" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/tensorflow_requirements.png 1128w, https://nerd-corner.com/wp-content/uploads/2023/05/tensorflow_requirements-300x151.png 300w, https://nerd-corner.com/wp-content/uploads/2023/05/tensorflow_requirements-1024x514.png 1024w, https://nerd-corner.com/wp-content/uploads/2023/05/tensorflow_requirements-768x385.png 768w" sizes="auto, (max-width: 1120px) 100vw, 1120px" /></p>
<h2>2. Checking your own graphics card</h2>
<p>As already mentioned in the disclaimer, the graphics card must be from NVIDIA. On the website <a href="https://developer.nvidia.com/cuda-gpus" target="_blank" rel="noopener">https://developer.nvidia.com/cuda-gpus</a> you can search for your own GPU and check its &#8220;Compute Capability&#8221;. The minimum requirement for Tensorflow is a value of 3.5, which is met by all current graphics cards.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1371 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/cuda-architectures.png" alt="cuda architectures" width="1120" height="158" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/cuda-architectures.png 1129w, https://nerd-corner.com/wp-content/uploads/2023/05/cuda-architectures-300x42.png 300w, https://nerd-corner.com/wp-content/uploads/2023/05/cuda-architectures-1024x144.png 1024w, https://nerd-corner.com/wp-content/uploads/2023/05/cuda-architectures-768x108.png 768w" sizes="auto, (max-width: 1120px) 100vw, 1120px" /></p>
<h2>3. Installing the latest NVIDIA drivers</h2>
<p>To enable the GPU for TensorFlow, the latest NVIDIA drivers must be installed. Simply go to the NVIDIA website and download the latest driver for your graphics card.</p>
<h2>4. Install Visual Studio (optional)</h2>
<p>In TensorFlow, some parts of the library have been written in C++ to maximise performance. Therefore, installing Visual Studio can help improve the compatibility and performance of TensorFlow:<a style="font-size: 1.125rem; color: midnightblue; outline: 0px;" href="https://visualstudio.microsoft.com/de/vs/"> https://visualstudio.microsoft.com/de/vs/</a></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1370 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/vs-studio-download.png" alt="Visual studio installer" width="690" height="304" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/vs-studio-download.png 694w, https://nerd-corner.com/wp-content/uploads/2023/05/vs-studio-download-300x132.png 300w" sizes="auto, (max-width: 690px) 100vw, 690px" /></p>
<p>It is sufficient to select individual components here. I selected everything with C++ under &#8220;Compilers, build tools and runtimes&#8221; as well as MSBuild, which in turn automatically installs a few more components. All in all, however, this amounts to over 25 GB!</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1369 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/VS-studio-components.png" alt="choose components in visual studio" width="880" height="751" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/VS-studio-components.png 885w, https://nerd-corner.com/wp-content/uploads/2023/05/VS-studio-components-300x256.png 300w, https://nerd-corner.com/wp-content/uploads/2023/05/VS-studio-components-768x655.png 768w" sizes="auto, (max-width: 880px) 100vw, 880px" /></p>
<h2>5. Install CUDA Toolkit</h2>
<p>The CUDA Toolkit is NVIDIA&#8217;s development kit for CUDA applications. TensorFlow requires CUDA to run on the GPU. Simply download and install the version of the CUDA Toolkit determined in step 1 from the NVIDIA website: <a href="https://developer.nvidia.com/cuda-downloads">https://developer.nvidia.com/cuda-downloads</a></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1368 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/Cuda-toolkit-download.png" alt="download cuda toolkit" width="1130" height="889" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/Cuda-toolkit-download.png 1135w, https://nerd-corner.com/wp-content/uploads/2023/05/Cuda-toolkit-download-300x236.png 300w, https://nerd-corner.com/wp-content/uploads/2023/05/Cuda-toolkit-download-1024x806.png 1024w, https://nerd-corner.com/wp-content/uploads/2023/05/Cuda-toolkit-download-768x604.png 768w" sizes="auto, (max-width: 1130px) 100vw, 1130px" /></p>
<h2>6. Installing the cuDNN libraries</h2>
<p>cuDNN is a library of deep learning primitives provided by NVIDIA. TensorFlow also requires cuDNN to run on the GPU. cuDNN is free, but you have to create an NVIDIA Developer account: <a href="https://developer.nvidia.com/rdp/cudnn-download">https://developer.nvidia.com/rdp/cudnn-download</a></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1367 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/cuDNN-download.png" alt="cuDNN download" width="1710" height="875" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/cuDNN-download.png 1717w, https://nerd-corner.com/wp-content/uploads/2023/05/cuDNN-download-300x154.png 300w, https://nerd-corner.com/wp-content/uploads/2023/05/cuDNN-download-1024x524.png 1024w, https://nerd-corner.com/wp-content/uploads/2023/05/cuDNN-download-768x393.png 768w, https://nerd-corner.com/wp-content/uploads/2023/05/cuDNN-download-1536x786.png 1536w" sizes="auto, (max-width: 1710px) 100vw, 1710px" /></p>
<p>After the download, the archive must be unpacked. Move its contents into the &#8220;NVIDIA GPU Computing Toolkit&#8221; folder under &#8220;Program Files&#8221;. After moving, copy the file path of the &#8220;bin&#8221; folder.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1366 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/unzip-cuDNN.png" alt="unzip cuDNN" width="670" height="226" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/unzip-cuDNN.png 673w, https://nerd-corner.com/wp-content/uploads/2023/05/unzip-cuDNN-300x101.png 300w" sizes="auto, (max-width: 670px) 100vw, 670px" /></p>
<h2>7. Set PATH variable</h2>
<p>Open the environment variables by simply typing &#8220;environment variables&#8221; into the Windows search. Then edit the system variable &#8220;Path&#8221; and add a new entry with the file path of the &#8220;bin&#8221; folder from the previous step.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1365 zoooom" src="https://nerd-corner.com/wp-content/uploads/2023/05/path-variable.png" alt="Set path variable for tensorflow gpu" width="1220" height="203" srcset="https://nerd-corner.com/wp-content/uploads/2023/05/path-variable.png 1223w, https://nerd-corner.com/wp-content/uploads/2023/05/path-variable-300x50.png 300w, https://nerd-corner.com/wp-content/uploads/2023/05/path-variable-1024x170.png 1024w, https://nerd-corner.com/wp-content/uploads/2023/05/path-variable-768x127.png 768w" sizes="auto, (max-width: 1220px) 100vw, 1220px" /></p>
<h2>8. Creating a virtual environment</h2>
<p>It is recommended to install TensorFlow in a virtual environment, ideally with Anaconda, to avoid conflicts with other Python packages. Simply download and install it here: <a href="https://www.anaconda.com/download">https://www.anaconda.com/download</a></p>
<p>Then start the Anaconda Navigator and create a new environment under Environments &gt; Create. Then click on Home again and launch an existing Python IDE from there. Additional IDEs such as PyCharm should also appear here automatically after installation.</p>
<h2>9. Verify Tensorflow GPU support</h2>
<p>To check whether TensorFlow detects the GPU, first install the Tensorflow package in the Python IDE. Note, however, that under Windows the GPU is only recognised up to version 2.10; in later versions it is no longer supported. Tensorflow 2.10 must therefore be installed.</p>
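<p>If you only have a version string at hand, a pure-Python sanity check (no TensorFlow import needed) can confirm whether that release still ships Windows GPU support:</p>

```python
# Windows GPU builds exist only for TensorFlow versions up to 2.10.
def windows_gpu_supported(tf_version: str) -> bool:
    """Return True if this TensorFlow version still ships Windows GPU support."""
    major, minor = (int(x) for x in tf_version.split(".")[:2])
    return (major, minor) <= (2, 10)

print(windows_gpu_supported("2.10.1"))  # True
print(windows_gpu_supported("2.11.0"))  # False
```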
<p>Then run the following code to display a list of available GPUs:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="python">import tensorflow as tf

# An empty list means that no GPU was detected
print(tf.config.list_physical_devices('GPU'))</pre>
<p>The post <a href="https://nerd-corner.com/enable-tensorflow-gpu-under-windows/">Enable TensorFlow GPU under Windows</a> appeared first on <a href="https://nerd-corner.com">Nerd Corner</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nerd-corner.com/enable-tensorflow-gpu-under-windows/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
	</channel>
</rss>
