What is Artificial Intelligence?
In this post, you’ll learn more about Artificial Intelligence (AI), a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. Understanding how Artificial Intelligence works will enable you to create GUIs for your AI applications with the Python Windows GUI Builder with ease.
Why is Artificial Intelligence important?
AI is important because it can give enterprises insights into their operations that they may not have been aware of previously. And more importantly, in some cases, AI can perform tasks better than humans.
AI tools often complete jobs quickly and with relatively few errors when it comes to repetitive, detail-oriented tasks such as analyzing large numbers of documents to gain insights, make decisions, manage risk, and carry out specific tasks or actions.
What is the relation between Artificial Intelligence, Machine Learning, Neural Networks, and Deep Learning?
Here is a good analogy created by IBM [1]:
“Perhaps the easiest way to think about Artificial Intelligence, Machine Learning, Neural Networks, and Deep Learning is to think of them like Russian nesting dolls. Each is essentially a component of the prior term.”
Or, in other words: Machine Learning is a subfield of Artificial Intelligence, Deep Learning is a subfield of Machine Learning, and Neural Networks make up the backbone of Deep Learning algorithms. It is the number of node layers, or depth, of a neural network that distinguishes a single neural network from a deep learning algorithm, which must have more than three layers.
Why use Python for AI?
AI requires a foundation of specialized hardware and software for writing and training Machine Learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.
In this article, we will limit our tutorial to Python implementations for AI/ML. Python is so popular for this work because of its easy-to-use, mature, and community-supported AI/ML libraries.
Delphi adds Powerful GUI features and functionalities to Python
In this tutorial, we’ll build Windows Apps with extensive AI capabilities by integrating Python’s AI libraries with Embarcadero’s Delphi, using Python4Delphi (P4D).
P4D empowers Python users with Delphi’s award-winning VCL functionalities for Windows, which enables us to build native Windows apps 5x faster. This integration enables us to create a modern GUI with Windows 10 looks and responsive controls for our Python for AI applications. Python4Delphi also comes with an extensive range of demos, use cases, and tutorials.
We’re going to cover the following…
How to use Keras, TensorFlow, scikit-learn, PyTorch, NLTK, Gensim, OpenCV, EasyOCR, Seaborn, and Bokeh Python libraries for Artificial Intelligence
All of them will be integrated with Python4Delphi to create Windows apps with AI capabilities.
Prerequisites
Before we begin, download and install the latest version of Python for your platform. Follow the Python4Delphi installation instructions mentioned here. Alternatively, you can check out the easy instructions found in the Getting Started With Python4Delphi video by Jim McKeeth.
How do I get started building with Python GUI?
First, open and run our Python GUI using project Demo01 from Python4Delphi with RAD Studio. Then insert the script into the lower Memo, click the Execute script button, and get the result in the upper Memo. You can find the Demo01 source on GitHub. The behind-the-scenes details of how Delphi manages to run your Python code in this amazing Python GUI can be found at this link.
1. How can I build an AI solution with Keras?
Keras is a high-level neural networks API for Python. Keras acts as an interface for the TensorFlow library. As a central part of the tightly connected TensorFlow 2.0 ecosystem, Keras covers every step of the Machine Learning workflow, from data management to hyperparameter tuning to deployment.
Keras is designed for human beings, not machines. Keras follows best practices for reducing cognitive load: It offers consistent and simple APIs, minimizes the number of user actions required for common use cases, and it provides clear and actionable error messages.
Worried about its popularity? You don’t need to be. Keras has extensive documentation and developer guides, and it is the most used deep learning framework among the top-5 winning teams on Kaggle. Keras was also the 10th most cited tool in the KDnuggets 2018 software poll, with 22% usage. Keras is used by CERN, NASA, NIH, and many more scientific organizations around the world (and yes, Keras is used at the LHC).
Are you looking for a simple, flexible, and powerful deep learning library, and want to build a nice GUI around it? You can deliver enterprise-grade AI solutions easily by combining the Keras and Python4Delphi libraries inside Delphi and C++Builder.
First, here is how you can get keras:
pip install keras
The following is a code example of keras to prepare and visualize the famous Kaggle Cats vs. Dogs dataset for Deep Learning (run this inside the lower Memo of the Python4Delphi Demo01 GUI):
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Filter out corrupted images
import os

num_skipped = 0
for folder_name in ("Cat", "Dog"):
    folder_path = os.path.join("PetImages", folder_name)
    for fname in os.listdir(folder_path):
        fpath = os.path.join(folder_path, fname)
        try:
            fobj = open(fpath, "rb")
            is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10)
        finally:
            fobj.close()

        if not is_jfif:
            num_skipped += 1
            # Delete corrupted image
            os.remove(fpath)

print("Deleted %d images" % num_skipped)

# Split the dataset into the training and validation set
image_size = (180, 180)
batch_size = 32

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    validation_split=0.2,
    subset="training",
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    validation_split=0.2,
    subset="validation",
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)

# Visualize the data
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(int(labels[i]))
        plt.axis("off")
plt.show()
What does the final keras result look like in the Python GUI?
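The script above only prepares and visualizes the data. As a follow-up, here is a minimal, hedged sketch of how a small convolutional model could be defined and trained on the resulting train_ds and val_ds; the architecture, layer sizes, and epoch count are illustrative assumptions, not the exact model from the Keras tutorial.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative model only; assumes train_ds and val_ds from the script above.
# layers.Rescaling requires a recent TensorFlow (2.6+); older versions expose it
# under layers.experimental.preprocessing.Rescaling.
model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(180, 180, 3)),  # matches image_size above
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary output: cat vs. dog
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# A few epochs are enough for a demo; increase for better accuracy
model.fit(train_ds, validation_data=val_ds, epochs=3)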
2. How can I build AI platforms with TensorFlow?
TensorFlow is an open-source software library for high-performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices.
Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning and the flexible numerical computation core is used across many other scientific domains.
First, here is how you can get tensorflow:
pip install tensorflow
The following script trains your first neural network in the Python GUI by Python4Delphi:
import tensorflow as tf

# Load and prepare the dataset
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build the tf.keras.Sequential model by stacking layers.
# Choose an optimizer and loss function for training:
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

predictions = model(x_train[:1]).numpy()
predictions

# Convert the logits to probabilities for each class
tf.nn.softmax(predictions).numpy()

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_fn(y_train[:1], predictions).numpy()

# Compile the deep learning model
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])

# Fitting, adjust the model parameters to minimize the loss:
model.fit(x_train, y_train, epochs=5)

# Evaluate the model
model.evaluate(x_test, y_test, verbose=2)

# Attach the softmax layer
probability_model = tf.keras.Sequential([
    model,
    tf.keras.layers.Softmax()
])
probability_model(x_test[:5])
What does the tensorflow example look like in the Python GUI?
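As a small follow-up (not part of the original script), here is one way to turn the probabilities produced by probability_model into predicted digit classes, reusing the variables defined above.

import tensorflow as tf

# Assumes probability_model and x_test from the script above
probs = probability_model(x_test[:5])

# Pick the most likely class for each of the five test images
predicted_classes = tf.argmax(probs, axis=1).numpy()
print(predicted_classes)  # five digit labels between 0 and 9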
3. How do I build a GUI for collections of ML algorithms with scikit-learn?
scikit-learn is an open-source Python library for Machine Learning. It has simple and efficient tools for predictive data analysis, built on top of SciPy, NumPy, and Matplotlib.
scikit-learn features various classification, regression, and clustering algorithms, including support vector machines, random forests, gradient boosting, k-means, and DBSCAN.
First, here is how you can get scikit-learn:
pip install -U scikit-learn
The following is a code example of sklearn to compare several classifiers in scikit-learn on synthetic datasets (run this inside the lower Memo of the Python4Delphi Demo01 GUI):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

h = .02  # step size in the mesh

names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Gaussian Process",
         "Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
         "Naive Bayes", "QDA"]

classifiers = [
    KNeighborsClassifier(3),
    SVC(kernel="linear", C=0.025),
    SVC(gamma=2, C=1),
    GaussianProcessClassifier(1.0 * RBF(1.0)),
    DecisionTreeClassifier(max_depth=5),
    RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
    MLPClassifier(alpha=1, max_iter=1000),
    AdaBoostClassifier(),
    GaussianNB(),
    QuadraticDiscriminantAnalysis()]

X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)

datasets = [make_moons(noise=0.3, random_state=0),
            make_circles(noise=0.2, factor=0.5, random_state=1),
            linearly_separable
            ]

figure = plt.figure(figsize=(27, 9))
i = 1
# iterate over datasets
for ds_cnt, ds in enumerate(datasets):
    # preprocess dataset, split into training and test part
    X, y = ds
    X = StandardScaler().fit_transform(X)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4, random_state=42)

    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    # just plot the dataset first
    cm = plt.cm.RdBu
    cm_bright = ListedColormap(['#FF0000', '#0000FF'])
    ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
    if ds_cnt == 0:
        ax.set_title("Input data")
    # Plot the training points
    ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
               edgecolors='k')
    # Plot the testing points
    ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6,
               edgecolors='k')
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    ax.set_xticks(())
    ax.set_yticks(())
    i += 1

    # iterate over classifiers
    for name, clf in zip(names, classifiers):
        ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
        clf.fit(X_train, y_train)
        score = clf.score(X_test, y_test)

        # Plot the decision boundary. For that, we will assign a color to each
        # point in the mesh [x_min, x_max]x[y_min, y_max].
        if hasattr(clf, "decision_function"):
            Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
        else:
            Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]

        # Put the result into a color plot
        Z = Z.reshape(xx.shape)
        ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)

        # Plot the training points
        ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
                   edgecolors='k')
        # Plot the testing points
        ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
                   edgecolors='k', alpha=0.6)

        ax.set_xlim(xx.min(), xx.max())
        ax.set_ylim(yy.min(), yy.max())
        ax.set_xticks(())
        ax.set_yticks(())
        if ds_cnt == 0:
            ax.set_title(name)
        ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
                size=15, horizontalalignment='right')
        i += 1

plt.tight_layout()
plt.show()
What does the sklearn classifier comparison result look like?
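If the full comparison feels heavy as a starting point, here is a minimal, hedged sketch of a single scikit-learn classifier on the built-in Iris dataset; the dataset and classifier choice are illustrative, not part of the original demo.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load a small built-in dataset and split it into train and test sets
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Fit a random forest and check its accuracy on held-out data
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))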
4. How do I perform advanced machine learning (ML) with PyTorch?
PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab (FAIR). It is free and open-source software released under the Modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface.
Many pieces of deep learning software are built on top of PyTorch, including Tesla Autopilot, Uber’s Pyro, HuggingFace’s Transformers, PyTorch Lightning, and Catalyst.
PyTorch provides two high-level features:
- Tensor computing (like NumPy) with strong acceleration via graphics processing units (GPU)
- Deep neural networks built on a tape-based automatic differentiation system
You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
First, here is how you can get pytorch:
pip install torch
Let’s try to load and visualize the FashionMNIST dataset from torchvision. Fashion-MNIST is a dataset of Zalando’s article images consisting of 60,000 training examples and 10,000 test examples. Each example comprises a 28×28 grayscale image and an associated label from one of 10 classes (0: "T-Shirt", 1: "Trouser", 2: "Pullover", 3: "Dress", 4: "Coat", 5: "Sandal", 6: "Shirt", 7: "Sneaker", 8: "Bag", 9: "Ankle Boot").
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda
import matplotlib.pyplot as plt

training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)

labels_map = {
    0: "T-Shirt",
    1: "Trouser",
    2: "Pullover",
    3: "Dress",
    4: "Coat",
    5: "Sandal",
    6: "Shirt",
    7: "Sneaker",
    8: "Bag",
    9: "Ankle Boot",
}

figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
    sample_idx = torch.randint(len(training_data), size=(1,)).item()
    img, label = training_data[sample_idx]
    figure.add_subplot(rows, cols, i)
    plt.title(labels_map[label])
    plt.axis("off")
    plt.imshow(img.squeeze(), cmap="gray")
plt.show()
Here is the pytorch simple example result in the Python GUI:
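The script above only loads and visualizes the data. As a next step, here is a minimal, hedged sketch of training a small fully connected network on the same training_data and test_data; the architecture, batch size, and epoch count are illustrative assumptions rather than part of the original demo.

import torch
from torch import nn
from torch.utils.data import DataLoader

# Assumes training_data and test_data from the script above
train_loader = DataLoader(training_data, batch_size=64, shuffle=True)
test_loader = DataLoader(test_data, batch_size=64)

# A small illustrative network: flatten each 28x28 image, then two hidden layers
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):  # a few epochs are enough for a demo
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate accuracy on the test set
correct, total = 0, 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print("Test accuracy:", correct / total)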
5. How can we work with natural languages using NLTK?
NLTK is a leading platform for building Python programs to work with human language data. Natural Language Processing, or NLP for short, covers, in a wide sense, any kind of computer manipulation of natural language. NLP is a field in Machine Learning concerned with a computer’s ability to understand, analyze, manipulate, and potentially generate human language.
NLTK provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, wrappers for industrial-strength NLP libraries, and an active discussion forum.
First, here is how you can get nltk:
pip install nltk
Practical work in Natural Language Processing typically uses large bodies of linguistic data or corpora. You can add the popular NLTK datasets to your system using this command:
python -m nltk.downloader popular
The following is a code example of nltk to create a classifier app that predicts gender from a person’s name (run this inside the lower Memo of the Python4Delphi Demo01 GUI):
# Importing libraries
import random
from nltk.corpus import names
import nltk

def gender_features(word):
    return {'last_letter': word[-1]}

# Preparing a list of examples and corresponding class labels.
labeled_names = ([(name, 'male') for name in names.words('male.txt')] +
                 [(name, 'female') for name in names.words('female.txt')])
random.shuffle(labeled_names)

# We use the feature extractor to process the names data.
featuresets = [(gender_features(n), gender)
               for (n, gender) in labeled_names]

# Divide the resulting list of feature sets into a training set and a test set.
train_set, test_set = featuresets[500:], featuresets[:500]

# The training set is used to train a new "naive Bayes" classifier.
classifier = nltk.NaiveBayesClassifier.train(train_set)

print(classifier.classify(gender_features('Sherlock')))
# Output should be 'male'

print(nltk.classify.accuracy(classifier, train_set))

# Show most informative features
classifier.show_most_informative_features(10)
Here is the nltk demo result in the Python GUI:
6. How can I perform advanced natural language processing (NLP) with Gensim?
Gensim is an open-source library for Unsupervised Topic Modeling and Natural Language Processing, using Modern Statistical Machine Learning. Gensim has been used and cited in over 1400 commercial and academic applications as of 2018, in a diverse array of disciplines from medicine to insurance claim analysis to patent search.
Design principles of Gensim:
- Practicality – As industry experts, they focus on proven, battle-hardened algorithms to solve real industry problems. More focus on engineering, less on academia.
- Memory independence – There is no need for the whole training corpus to reside fully in RAM at any one time. Can process large, web-scale corpora using data streaming.
- Performance – Highly optimized implementations of popular vector space algorithms using C, BLAS, and memory mapping.
Today, Gensim is known to be one of the most robust, efficient, and hassle-free pieces of software for realizing unsupervised semantic modeling from plain text.
First, here is how you can get gensim:
pip install gensim
The following is a code example of gensim to perform similarity queries (run this inside the lower Memo of the Python4Delphi Demo01 GUI):
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

# Creating the Corpus
from collections import defaultdict
from gensim import corpora

documents = [
    "Human machine interface for lab abc computer applications",
    "A survey of user opinion of computer system response time",
    "The EPS user interface management system",
    "System and human system engineering testing of EPS",
    "Relation of user perceived response time to error measurement",
    "The generation of random binary unordered trees",
    "The intersection graph of paths in trees",
    "Graph minors IV Widths of trees and well quasi ordering",
    "Graph minors A survey",
]

# Remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [
    [word for word in document.lower().split() if word not in stoplist]
    for document in documents
]

# Remove words that appear only once
frequency = defaultdict(int)
for text in texts:
    for token in text:
        frequency[token] += 1

texts = [
    [token for token in text if frequency[token] > 1]
    for text in texts
]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Similarity interface
from gensim import models
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

doc = "Human computer interaction"
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lsi = lsi[vec_bow]  # Convert the query to LSI space
print(vec_lsi)

# We will be considering `cosine similarity <http://en.wikipedia.org/wiki/Cosine_similarity>`_
# to determine the similarity of two vectors.

# Initializing query structures
from gensim import similarities
index = similarities.MatrixSimilarity(lsi[corpus])  # Transform corpus to LSI space and index it

index.save('C:/Users/ASUS/deerwester.index')
index = similarities.MatrixSimilarity.load('C:/Users/ASUS/deerwester.index')

# Performing queries
sims = index[vec_lsi]  # Perform a similarity query against the corpus
print(list(enumerate(sims)))  # Print (document_number, document_similarity) 2-tuples

# Cosine measure returns similarities in the range `<-1, 1>` (the greater, the more similar),
# so that the first document has a score of 0.99809301 etc.
sims = sorted(enumerate(sims), key=lambda item: -item[1])
for doc_position, doc_score in sims:
    print(doc_score, documents[doc_position])
What does the gensim result look like in the Python GUI?
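Beyond similarity queries, Gensim is also widely used for word embeddings. Here is a minimal, hedged sketch that trains a tiny Word2Vec model on the same tokenized texts from the script above; the parameters are illustrative and assume Gensim 4.x.

from gensim.models import Word2Vec

# Assumes `texts` (the tokenized documents) from the script above;
# the parameters below are illustrative, not tuned values.
w2v = Word2Vec(sentences=texts, vector_size=50, window=3, min_count=1, epochs=50)

# Inspect the learned vector for a word and its nearest neighbours in the toy corpus
print(w2v.wv['human'][:5])           # first few dimensions of the embedding
print(w2v.wv.most_similar('human'))  # words closest to "human" in vector space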
7. How to get started easily in computer vision with OpenCV?
OpenCV (Open Source Computer Vision Library) is an open-source Computer Vision and Machine Learning software library. OpenCV was built to provide a common infrastructure for Computer Vision applications and to accelerate the use of machine perception in commercial products. OpenCV supports various programming languages including Python.
OpenCV has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art Computer Vision and Machine Learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high-resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc.
First, here is how you can get opencv to work with Python4Delphi to create a GUI with Computer Vision and Machine Learning capabilities:
pip install opencv-python
Note: This is an unofficial pre-built CPU-only OpenCV package for Python.
The following is a code example of opencv to perform a perspective transformation of an image (run this inside the lower Memo of the Python4Delphi Demo01 GUI):
import cv2
import numpy as np
import matplotlib.pyplot as plt

image = cv2.imread("C:/Users/YOUR_USERNAME/got.jpg")

pts1 = np.float32([[535, 145], [625, 145], [535, 250], [625, 250]])
pts2 = np.float32([[0, 0], [400, 0], [0, 400], [400, 400]])

M = cv2.getPerspectiveTransform(pts1, pts2)
dst = cv2.warpPerspective(image, M, (400, 400))

plt.subplot(121), plt.imshow(image), plt.title('Input')
plt.subplot(122), plt.imshow(dst), plt.title('Output')
plt.show()
Here is the opencv result in the Python GUI:
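The perspective transform is only one of the many algorithms mentioned earlier. As another small illustration, here is a hedged sketch of face detection with OpenCV’s bundled Haar cascade; the image path is a placeholder you would replace with a photo on your own machine.

import cv2
import matplotlib.pyplot as plt

# Load OpenCV's bundled frontal-face Haar cascade
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Placeholder path: replace with an image on your own machine
image = cv2.imread("C:/Users/YOUR_USERNAME/faces.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a rectangle around each one
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title("Detected faces")
plt.axis("off")
plt.show()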
8. How can I automatically recognize printed characters in images with EasyOCR?
EasyOCR, as the name suggests, is a Python package that allows computer vision developers to effortlessly perform Optical Character Recognition. EasyOCR provides end-to-end, ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, Cyrillic, and more.
When it comes to OCR, using EasyOCR is by far the most straightforward way to apply Optical Character Recognition:
- The EasyOCR package can be installed with a single pip command.
- The dependencies of the EasyOCR package are minimal, making it easy to configure your OCR development environment.
- Once EasyOCR is installed, only one import statement is required to import the package into your project.
- From there, all you need is two lines of code to perform OCR — one to initialize the Reader class and then another to OCR the image via the readtext function.
First, here is how you can get easyocr:
pip install easyocr
Next, we will test the EasyOCR library to detect both Chinese and English characters in this image:
The following is a basic usage of easyocr to detect both Chinese (ch_sim) and English (en) characters in the sample image above (run this inside the lower Memo of the Python4Delphi Demo01 GUI):
import os
# Switch the Windows console code page to 936 (Simplified Chinese) so Chinese output displays correctly
os.system('cmd /k "chcp 936"')

import easyocr

# Load the Simplified Chinese and English recognition models
reader = easyocr.Reader(['ch_sim', 'en'])
result = reader.readtext('C:/Users/YOUR_USERNAME/chinese2.jpg')
print(result)
What does the easyocr optical character recognition result look like?
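As a small follow-up, readtext also accepts a detail parameter; here is a hedged sketch that returns just the recognized strings without bounding boxes, reusing the reader from the script above.

# Assumes `reader` from the script above is already initialized
text_only = reader.readtext('C:/Users/YOUR_USERNAME/chinese2.jpg', detail=0)
for line in text_only:
    print(line)  # each recognized line of text, without box coordinates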
9. How to gain insight instantly from datasets with Seaborn?
Seaborn is a library for making statistical graphics in Python. It is built on top of matplotlib and is closely integrated with pandas data structures. Seaborn enhances matplotlib’s plotting functionality.
Here is some of the functionality that Seaborn offers:
- A dataset-oriented API for examining relationships between multiple variables
- Specialized support for using categorical variables to show observations or aggregate statistics
- Options for visualizing univariate or bivariate distributions and for comparing them between subsets of data
- Automatic estimation and plotting of linear regression models for different kinds of dependent variables
- Convenient views onto the overall structure of complex datasets
- High-level abstractions for structuring multi-plot grids that let you easily build complex visualizations
- Concise control over matplotlib figure styling with several built-in themes
- Tools for choosing color palettes that faithfully reveal patterns in your data
Do you want to improve your Matplotlib plots using Seaborn, for example by creating a scatterplot with varying point sizes and hues, in a Windows GUI app? This section will show you how to get started!
First, here is how you can get seaborn:
pip install seaborn
The following is an introductory example of seaborn to create a scatterplot with varying point sizes and hues (run this inside the lower Memo of the Python4Delphi Demo01 GUI):
import seaborn as sns
import matplotlib.pyplot as plt  # needed for plt.show() below

sns.set_theme(style="white")

# Load the example mpg dataset
mpg = sns.load_dataset("mpg")

# Plot miles per gallon against horsepower with other semantics
sns.relplot(x="horsepower", y="mpg", hue="origin", size="weight",
            sizes=(40, 400), alpha=.5, palette="muted",
            height=6, data=mpg)
plt.show()
Here is the seaborn result in the Python GUI:
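One of the bullet points above mentions automatic estimation and plotting of linear regression models. Here is a minimal, hedged sketch of that feature using the same mpg dataset; the column choices are just an example.

import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme(style="white")
mpg = sns.load_dataset("mpg")

# Fit and plot a simple linear regression of mpg on horsepower
sns.lmplot(data=mpg, x="horsepower", y="mpg")
plt.show()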
10. How to bring your data visualizations to another level with Bokeh?
Bokeh is an interactive visualization library for modern web browsers. It provides elegant, concise construction of versatile graphics and affords high-performance interactivity over large or streaming datasets. Bokeh can help anyone who would like to quickly and easily make interactive plots, dashboards, and data applications.
At a glance, Bokeh provides us with:
- Flexible – Bokeh makes it simple to create common plots but also can handle custom or specialized use-cases.
- Interactive – Tools and widgets let you and your audience probe “what if” scenarios or drill down into the details of your data.
- Shareable – Plots, dashboards, and apps can be published on web pages or Jupyter notebooks.
- Productive – Work in Python close to all the PyData tools you are already familiar with.
- Powerful – You can always add custom JavaScript to support advanced or specialized cases.
- Open Source – Everything, including the Bokeh server, is BSD licensed and available on GitHub.
This section will guide you through combining Python4Delphi with the Bokeh library, inside Delphi and C++Builder, from installing Bokeh with pip to plotting an interactive map of unemployment rates.
First, here is how you can get bokeh:
pip install bokeh
The following is a code example of bokeh to plot an interactive map of the unemployment rate in Texas (run this inside the lower Memo of the Python4Delphi Demo01 GUI):
from bokeh.io import show
from bokeh.models import LogColorMapper
from bokeh.palettes import Viridis6 as palette
from bokeh.plotting import figure

from bokeh.sampledata.unemployment import data as unemployment
from bokeh.sampledata.us_counties import data as counties

palette = tuple(reversed(palette))

counties = {
    code: county for code, county in counties.items() if county["state"] == "tx"
}

county_xs = [county["lons"] for county in counties.values()]
county_ys = [county["lats"] for county in counties.values()]

county_names = [county['name'] for county in counties.values()]
county_rates = [unemployment[county_id] for county_id in counties]
color_mapper = LogColorMapper(palette=palette)

data = dict(
    x=county_xs,
    y=county_ys,
    name=county_names,
    rate=county_rates,
)

TOOLS = "pan,wheel_zoom,reset,hover,save"

p = figure(
    title="Texas Unemployment, 2009", tools=TOOLS,
    x_axis_location=None, y_axis_location=None,
    tooltips=[
        ("Name", "@name"), ("Unemployment rate", "@rate%"), ("(Long, Lat)", "($x, $y)")
    ])
p.grid.grid_line_color = None
p.hover.point_policy = "follow_mouse"

p.patches('x', 'y', source=data,
          fill_color={'field': 'rate', 'transform': color_mapper},
          fill_alpha=0.7, line_color="white", line_width=0.5)

show(p)
Here is the bokeh interactive plot result in the P4D GUI:
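Since shareability is one of Bokeh’s selling points listed above, here is a minimal, hedged sketch of saving the figure from the script above as a standalone HTML file; the output filename is just an example.

from bokeh.io import output_file, save

# Assumes `p` (the figure) from the script above; the filename is an example
output_file("texas_unemployment.html", title="Texas Unemployment, 2009")
save(p)  # writes a standalone HTML file you can open in a browser or publish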
Are you ready to build awesome things with these Python AI libraries?
We have demonstrated 10 powerful Python libraries for Artificial Intelligence-related tasks (Keras, TensorFlow, scikit-learn, PyTorch, NLTK, Gensim, OpenCV, EasyOCR, Seaborn, and Bokeh), all of them wrapped neatly inside a powerful GUI provided by Python4Delphi. We can’t wait to see what you build with Python4Delphi!
Want to know more? Check out Python4Delphi, which easily allows you to build Python GUIs for Windows using Delphi, and download RAD Studio to build more powerful Python GUI Windows apps 5x faster with less code.
References & further reading
[1] Kavlakoglu, E. (2020). AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference? IBM Blog. ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks
[2] Hakim, M. A. (2021). Article16 – 10 Python’s Artificial Intelligence Libraries. embarcaderoBlog-repo GitHub. github.com/MuhammadAzizulHakim/embarcaderoBlog-repo/tree/main/Article16%20-%2010%20Python’s%20Artificial%20Intelligence%20Libraries
[3] Hakim, M. A. (2021). Artificial Intelligence Solutions With Keras Library In A Windows Python App. PythonGUI.org. Embarcadero Technologies. pythongui.org/artificial-intelligence-solutions-with-keras-library-in-a-windows-python-app
[4] Hakim, M. A. (2021). Build An Artificial Intelligence Solution With TensorFlow Library In A Delphi Windows App. PythonGUI.org. Embarcadero Technologies. pythongui.org/build-an-artificial-intelligence-solutions-with-tensorflow-library-in-a-delphi-windows-app
[5] Hakim, M. A. (2021). Build A Machine Learning Solutions With Scikit-Learn Library In A Delphi Windows App. PythonGUI.org. Embarcadero Technologies. pythongui.org/build-a-machine-learning-solutions-with-scikit-learn-library-in-a-delphi-windows-app
[6] Hakim, M. A. (2021). Build An Artificial Intelligence Solutions With PyTorch Library In A Delphi Windows App. PythonGUI.org. Embarcadero Technologies. pythongui.org/build-an-artificial-intelligence-solutions-with-pytorch-library-in-a-delphi-windows-app
[7] Hakim, M. A. (2021). Quickly Build A Python GUI App With Powerful Natural Language Processing Capabilities Using NLTK Library In A Delphi Windows App. PythonGUI.org. Embarcadero Technologies. pythongui.org/quickly-build-a-python-gui-app-with-powerful-natural-language-processing-capabilities-using-nltk-library-in-a-delphi-windows-app
[8] Hakim, M. A. (2021). Integrate Robust Similarity Queries Capabilities To Your Python GUI App With Powerful Gensim Library. PythonGUI.org. Embarcadero Technologies. pythongui.org/integrate-robust-similarity-queries-capabilities-to-your-python-gui-app-with-powerful-gensim-library
[9] Hakim, M. A. (2021). Learn To Build A Python GUI For Computer Vision Tasks With Powerful OpenCV Library In A Delphi Windows App. PythonGUI.org. Embarcadero Technologies. pythongui.org/learn-to-build-a-python-gui-for-computer-vision-tasks-with-powerful-opencv-library-in-a-delphi-windows-app
[10] Hakim, M. A. (2022). How To Build A GUI For Optical Character Recognition. PythonGUI.org. Embarcadero Technologies. pythongui.org/how-to-build-a-gui-for-optical-character-recognition
[11] Hakim, M. A. (2021). 3 Ways To Create Enterprise-Grade Graphics Using The Seaborn Library. PythonGUI.org. Embarcadero Technologies. pythongui.org/3-ways-to-create-enterprise-grade-graphics-using-the-seaborn-library
[12] Hakim, M. A. (2021). Learn To Build A Python GUI For Interactive Data Visualizations Using Bokeh Library In A Delphi Windows App. PythonGUI.org. Embarcadero Technologies. pythongui.org/learn-to-build-a-python-gui-for-interactive-data-visualizations-using-bokeh-library-in-a-delphi-windows-app