
Tuesday, March 17, 2020

California legislation de-fanged, gives no relief to Amazon sellers

 

As predicted, the California legislation will not make Seller Performance any friendlier

This summer, optimistic Amazon sellers believed there was change in the air. Amazon made changes to its Business Solutions Agreement, and the California legislature began work on a bill that purported to protect sellers from abusive online marketplaces.

Over the last few weeks, as the updated Business Solutions Agreement (BSA) has come into force, it’s unfortunate that my predictions have come true. As I suspected, nothing has really changed in Seller Performance and its actions toward sellers – at least not that anyone on the outside can see. It’s not kinder, it’s not gentler, it’s not more responsive, and there are no 30-day cure notices, as the BSA said there would be.

Also as predicted, the California legislature’s AB-1790 has been watered down to a point that it means almost nothing. The most recent version of the bill strikes out the few provisions that I truly believed would have a positive impact for sellers. At the same time, it’s clear that someone (an Amazon lobbyist?) explained “risk management” to the legislature, as there are now provisos to protect every step of the investigation and enforcement processes in Seller Performance.

Striking out the good

One provision originally in the bill said it would “prohibit a marketplace from destroying products in its possession that are the property of a marketplace seller without offering the marketplace seller a reasonable opportunity to retrieve the marketplace seller’s property.”

That language was struck out completely, however, from the most recent version of the bill.

Why? I’m speculating. But I believe this was far too broad, as it would allow bad actors to place removal orders for counterfeit or stolen goods. The legislation refused to wade further into the devilish details by saying something like, “a reasonable opportunity to retrieve authentic, legitimate goods.” I suppose they believed it would take too much work to define “authentic” or “legitimate,” so they just abandoned the hard work altogether.

Unfortunately, I’ve seen far too many deactivated sellers have their inventory seized and destroyed – even when they attempted to place removal orders and were selling legitimate goods. This will continue to be an uphill battle.

Embracing risk management

The legislature added many provisos to the bill, essentially giving Amazon and other marketplaces complete power to implement their current risk management structure. For example, a section initially said when a marketplace suspended a seller, it then had to provide a “written statement of reasons” including “the specific facts and circumstances that led to the decision.”

Check out the revised version. It now says the written statement of reasons must do this: “Without disclosing information that would result in the disclosure of any proprietary, confidential or trade secret information, or disclosing information that would hinder any investigation or prevention of deceptive, fraudulent, or illegal activity, describe the facts and circumstances that led to the decision unless the marketplace reasonably believes that giving a written statement of reasons could negatively impact the safety or property of another user or the marketplace itself.”
In other words, Amazon doesn’t have to tell you anything. And the word “specific” was removed. So your suspension notice will be generic, just as it always has been.
Another section has Amazon’s fingerprints all over it. The past language stated that the marketplace must “identify the terms or conditions that permit the suspension or termination.” That has been changed to “identify the term, condition, or policy that serves as the basis for the suspension or termination.” This is the same-old same-old, but now California’s legislature is adopting specific language that recognizes what has always been – Amazon’s policies have the force of contract law with sellers. In other words, third-party sellers will see no improvement or relief.

What does it all mean?

Just as the changes to the BSA made very little positive difference to Amazon sellers, the California legislation will have little to no positive effects. The only question I have at this point is what it means for those sellers whose accounts are frauded. Let me explain. Sometimes, Amazon designates an account as “frauded” because Amazon suspects one of a wide range of fraudulent activities has occurred. This can be anything from money laundering to buying/selling fake gift cards to violating laws about overseas ownership, and more.

In these rare cases, the account is closed with no notice whatsoever. The seller cannot log in, and there is no appeals process in place. Sometimes, we can get these accounts reinstated if the fraud designation was made in error (yes, it happens – twice in the last month at Riverbend, in fact!). But it is an extremely tough uphill battle.

Under the new legislation, Amazon would have to provide frauded California sellers with a reason for their account’s deactivation. Don’t be surprised if this provision disappears down the memory hole by the time the next version of the bill is considered.

Friday, March 13, 2020

Four rules to support Amazon merchant-fulfilled (MFN) sales in tough times

 

With current Amazon FBA limitations, take shipping into your own hands

Sellers are suddenly embracing Amazon merchant-fulfilled (MFN) orders to keep their businesses afloat. Sellers need to handle the details – the right way.

Thanks to Covid-19, and a massive influx of orders for basic household goods, Amazon has taken two major actions:

  1. Limited inbound FBA shipments of non-essential items through April 5 – It’s possible that date may be extended
  2. De-prioritized fulfillment of orders of non-essential items, with some promise dates for delivery now extending into May

  We suggest that Amazon sellers who are growing their MFN capabilities, or are new to MFN, follow these strategies to best manage their ability to fulfill orders:

  1. Limit purchase quantities. Customers are buying massive quantities of some items.
  2. Extend handling time. Instead of “usually ships now” or “ships in 24 hours,” change your handling time to 2-3 or 3-5 days.
  3. Cancel orders sooner rather than later, if you must. When stock moves quickly, you may find it necessary to cancel orders. Rather than waiting days or a week to see if you get more stock on hand, we suggest that you cancel the order right away. Buyers may be less angry and less apt to complain or leave negative feedback.
  4. Hit pause, early and often. If you see your metrics are in danger of missing targets, put your store on vacation – immediately. Get your existing orders filled and cleaned up before you change back to an active status. If you ship Seller Fulfilled Prime (SFP), remember you have the ability to set a maximum number of SFP orders per day. Use it.

Thursday, March 12, 2020

10 positive actions Amazon sellers can take during quarantine

 

Whether your 3P account is large or small, take action today

It’s a tough time for Amazon sellers – or any small business. With the situation changing every day, it’s easy to become paralyzed. Instead, how about choosing one of these 10 small ways to move your account forward – either by driving cash flow or improving listings for the future?
  1. Sling your own boxes. Amazon’s FBA operations will remain unreliable for at least a couple of months, as supply chain issues are smoothed out. This is your chance to gain a competitive advantage by merchant-fulfilling your products (MFN). The process is simpler than most FBA sellers assume it will be. Also, if you’ve got a spouse or kids home from work and school, it’s their time to contribute to the family income!
  2. Find a fulfillment house. If it’s not possible – or attractive – for you to pack and ship boxes, hire a fulfillment house. This is a great long-term strategy as well, since there are times during holidays and crises when MFN listings can out-perform FBA. If you need a referral to a high-quality fulfillment house, contact Riverbend Consulting. We can give you options.
  3. File for reimbursements. If you’re an FBA seller, Amazon likely owes you money. Inventory gets lost, damaged, not received and not returned. When these things happen, Amazon sometimes automatically reimburses your account. But more often, you must file a case to ask for the funds or inventory back. Reimbursement cases can go back 18 months, so starting now can provide a much-needed cash boost for your account. Want help? Call Riverbend Consulting. We will file your cases and only take a fee if we get you money.
  4. Source products for the future. Quarantine creates its own possible revenue streams for the future. For example, is it reasonable to expect a baby boom in 9 months? If you think it may, now is the time to find sources for baby clothing and gifts.
  5. List your death pile. There’s an old eBay reference called the “death pile.” It’s the stuff lurking in closets and spare bedrooms that was sourced – but then never listed. eBay sellers aren’t the only ones with death piles. Amazon sellers have items lurking in warehouses and closets, too. If it’s not listed, it won’t sell! Get those items listed and merchant-fulfill them.
  6. Clean up your catalog. Most sellers have dozens to thousands of inactive listings in their inventory. If you won’t sell these items again – delete them! Otherwise, they become an enforcement magnet for Amazon, which sees any listing as an item you intend to sell again. This makes you susceptible to listing suspensions for restricted products, price gouging and more.
  7. Work your PPC ads. When sales are slow, the strong advertise. Refine your pay-per-click Amazon ad campaigns. Improve your targeting and keywords. If you need help, contact Riverbend Consulting! We can refer you to affordable, high-quality resources for PPC.
  8. Build out A+ content. If you own a brand, now is the ideal time to improve your listing quality. Add video and compelling text to your listings. Explain why your product is superior to the competition. Get creative and build pages that draw in customers.
  9. Re-shoot lousy listing photos. It’s a good time to revisit the images on your listing detail pages. If the primary image doesn’t follow Amazon’s standards (product only, no text, pure white background), shoot a new one. If you don’t have secondary images and “lifestyle” photos, add some. These help pop your listing higher in search results – and improve conversions.
  10. Learn to love another platform. Times are tough on all online platforms right now. But this crisis proves – yet again – that diversification is key. Some sellers are seeing impressive success on eBay. Others are pushing on Poshmark, Mercari, and other “secondary” platforms. Do the research and find another place to roll out your products.

Tuesday, March 10, 2020

Feature Engineering Techniques in Machine Learning

 

Hey all, in this post I will explain the techniques available for feature engineering in machine learning. Here I only cover the types of feature engineering and where each one is useful. The working guidelines and further details of each technique will be covered in subsequent posts.

What is meant by Feature?

In a given dataset, an attribute or variable that carries meaning in the context of the problem is called a feature. If an attribute has no impact on the problem, it is not part of the problem.
For example, in computer vision an image is an observation, but a feature could be a line in the image. In natural language processing, a document or a tweet could be an observation, and a phrase or word count could be a feature. In speech recognition, an utterance could be an observation, but a feature might be a single word or phoneme.

What is feature Engineering ?

Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data. It is so important to how your model performs, that even a simple model with great features can outperform a complicated algorithm with poor ones. In fact, feature engineering has been described as easily the most important factor in determining the success or failure of your predictive model. Feature engineering really boils down to the human element in machine learning. How much you understand the data, with your human intuition and creativity, can make the difference.

Why do we need it ?

There are many reasons, but the most common and important ones are listed below:
  • Improves accuracy: Less misleading data means modeling accuracy improves.
  • Reduces overfitting: Less redundant data means less opportunity to make decisions based on noise.
  • Reduces training time: Fewer data points reduce algorithm complexity, and algorithms train faster.
A common practice is to come up with as many features as possible (more than 100 is not unusual), but the presence of irrelevant features hurts generalization.

Note: In practice we will use only some of these techniques. Feature engineering fully depends on the domain and the data available. Keep this in mind forever.

The Curse of Dimensionality

The number of samples required to achieve the same accuracy grows exponentially with the number of variables, while in practice the number of training examples is fixed. As a result, the classifier’s performance usually degrades as the number of features grows large.
In many cases, the information that is lost by discarding variables is made up for by a more accurate mapping/sampling in the lower-dimensional space.

Feature engineering has three components, listed below:
  1. Feature Construction or Generation
  2. Feature Extraction
  3. Feature Selection

 1. Feature Construction

Feature construction is the manual creation of new features from raw data. This requires spending a lot of time with actual sample data (not aggregates) and thinking about the underlying form of the problem, the structures in the data, and how best to expose them to predictive modeling algorithms. With tabular data, it often means a mixture of aggregating or combining features to create new features, and decomposing or splitting features to create new features.
With textual data, it often means devising document- or context-specific indicators relevant to the problem. With image data, it can often mean enormous amounts of time prescribing automatic filters to pick out relevant structures. Below are the most common ways to create new features in machine learning.

1.1 Indicator Variables:

The first type of feature engineering involves using indicator variables to isolate key information. Now, some of you may be wondering, “shouldn’t a good algorithm learn the key information on its own?” Well, not always. It depends on the amount of data you have and the strength of competing signals.
You can help your algorithm “focus” on what’s important by highlighting it beforehand.
Indicator variable from threshold: Let’s say you’re studying alcohol preferences by U.S. consumers and your dataset has an age feature. You can create an indicator variable for age >= 21 to distinguish subjects who were over the legal drinking age.
Indicator variable from multiple features: You’re predicting real-estate prices and you have the features n_bedrooms and n_bathrooms. If houses with 2 beds and 2 baths command a premium as rental properties, you can create an indicator variable to flag them.
Indicator variable for special events: You’re modeling weekly sales for an e-commerce site. You can create two indicator variables for the weeks of Black Friday and Christmas.
Indicator variable for groups of classes: You’re analyzing website conversions and your dataset has the categorical feature traffic_source. You could create an indicator variable for paid_traffic by flagging observations with traffic source values of  "Facebook Ads" or "Google Adwords".
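
As a rough illustration, here is a minimal pandas sketch of these indicator-variable ideas. The DataFrame and its column names (age, n_bedrooms, n_bathrooms, traffic_source) are hypothetical and only mirror the examples above:

```python
import pandas as pd

# Hypothetical dataset illustrating the indicator-variable examples above
df = pd.DataFrame({
    "age": [19, 25, 34, 21],
    "n_bedrooms": [2, 3, 2, 1],
    "n_bathrooms": [2, 1, 2, 1],
    "traffic_source": ["Facebook Ads", "Organic", "Google Adwords", "Email"],
})

# Indicator from a threshold: legal drinking age
df["over_21"] = (df["age"] >= 21).astype(int)

# Indicator from multiple features: 2-bed / 2-bath properties
df["two_bed_two_bath"] = ((df["n_bedrooms"] == 2) & (df["n_bathrooms"] == 2)).astype(int)

# Indicator for a group of classes: paid traffic sources
df["paid_traffic"] = df["traffic_source"].isin(["Facebook Ads", "Google Adwords"]).astype(int)

print(df)
```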

1.2 Interaction Features

The next type of feature engineering involves highlighting interactions between two or more features. Have you ever heard the phrase, “the whole is greater than the sum of its parts”? Well, some features can be combined to provide more information than they would as individuals.
Specifically, look for opportunities to take the sum, difference, product, or quotient of multiple features.
Note: We don’t recommend using an automated loop to create interactions for all your features. This leads to “feature explosion.”
Sum of two features: Let’s say you wish to predict revenue based on preliminary sales data. You have the features sales_blue_pens and sales_black_pens. You could sum those features if you only care about overall sales_pens.
Difference between two features: You have the features house_built_date and house_purchase_date. You can take their difference to create the feature house_age_at_purchase.
Product of two features: You’re running a pricing test, and you have the feature price and an indicator variable conversion. You can take their product to create the feature earnings.
Quotient of two features: You have a dataset of marketing campaigns with the features n_clicks and n_impressions. You can divide clicks by impressions to create  click_through_rate, allowing you to compare across campaigns of different volume.
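
A minimal pandas sketch of the four arithmetic interactions above. The columns (sales_blue_pens, house_built_date, price, n_clicks, and so on) are hypothetical names taken straight from the examples, bundled into one toy DataFrame for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "sales_blue_pens": [120, 80],
    "sales_black_pens": [200, 150],
    "house_built_date": pd.to_datetime(["1990-05-01", "2005-08-15"]),
    "house_purchase_date": pd.to_datetime(["2015-06-01", "2018-01-10"]),
    "price": [9.99, 14.99],
    "conversion": [1, 0],
    "n_clicks": [35, 12],
    "n_impressions": [1000, 800],
})

# Sum: overall pen sales
df["sales_pens"] = df["sales_blue_pens"] + df["sales_black_pens"]

# Difference: house age (in years) at purchase
df["house_age_at_purchase"] = (df["house_purchase_date"] - df["house_built_date"]).dt.days / 365.25

# Product: earnings from the pricing test
df["earnings"] = df["price"] * df["conversion"]

# Quotient: click-through rate
df["click_through_rate"] = df["n_clicks"] / df["n_impressions"]
```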

1.3 Feature Representation

This next type of feature engineering is simple yet impactful. It’s called feature representation. Your data won’t always come in the ideal format. You should consider whether you’d gain information by representing the same feature in a different way.
Date and time features: Let’s say you have the feature purchase_datetime. It might be more useful to extract purchase_day_of_week and purchase_hour_of_day. You can also aggregate observations to create features such as purchases_over_last_30_days.
Numeric to categorical mappings: You have the feature years_in_school. You might create a new feature grade with classes such as "Elementary School", "Middle School", and "High School".
Grouping sparse classes: You have a feature with many classes that have low sample counts. You can try grouping similar classes and then grouping the remaining ones into a single "Other" class.
Creating dummy variables: Depending on your machine learning implementation, you may need to manually transform categorical features into dummy variables. You should always do this after grouping sparse classes.
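
Here is a small, hypothetical pandas sketch of these representation ideas: extracting date parts, binning a numeric feature into categories, and creating dummy variables. The data and the bin edges are made up purely for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "purchase_datetime": pd.to_datetime(["2020-03-02 09:15", "2020-03-07 21:40", "2020-03-08 13:05"]),
    "years_in_school": [4, 8, 11],
})

# Date and time features
df["purchase_day_of_week"] = df["purchase_datetime"].dt.day_name()
df["purchase_hour_of_day"] = df["purchase_datetime"].dt.hour

# Numeric-to-categorical mapping (bin edges are illustrative)
df["grade"] = pd.cut(
    df["years_in_school"],
    bins=[0, 5, 8, 12],
    labels=["Elementary School", "Middle School", "High School"],
)

# Dummy variables from the categorical feature (do this after grouping sparse classes)
df = pd.get_dummies(df, columns=["grade"])
```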

1.4 External Data

An underused type of feature engineering is bringing in external data. This can lead to some of the biggest breakthroughs in performance. For example, one way quantitative hedge funds perform research is by layering together different streams of financial data.
Many machine learning problems can benefit from bringing in external data. Here are some examples:
Time series data: The nice thing about time series data is that you only need one feature, some form of date, to layer in features from another dataset.
External API’s: There are plenty of API’s that can help you create features. For example, the Microsoft Computer Vision API can return the number of faces from an image.
Geocoding: Let’s say you have street_address, city, and state. Well, you can geocode them into latitude and longitude. This will allow you to calculate features such as local demographics (e.g. median_income_within_2_miles) with the help of another dataset.
Other sources of the same data: How many ways could you track a Facebook ad campaign? You might have Facebook’s own tracking pixel, Google Analytics, and possibly another third-party software. Each source can provide information that the others don’t track. Plus, any differences between the datasets could be informative (e.g. bot traffic that one source ignores while another source keeps).
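
As a minimal sketch of layering in external data, assume a hypothetical internal sales table and an external holiday calendar that share a date column; a left join is enough to bring the outside information in:

```python
import pandas as pd

# Hypothetical internal sales data
sales = pd.DataFrame({
    "date": pd.to_datetime(["2019-11-25", "2019-11-29", "2019-12-23"]),
    "units_sold": [140, 310, 255],
})

# Hypothetical external dataset keyed on the same date column
holidays = pd.DataFrame({
    "date": pd.to_datetime(["2019-11-29", "2019-12-25"]),
    "holiday": ["Black Friday", "Christmas"],
})

# Layer the external data onto the time series with a left join on the date
enriched = sales.merge(holidays, on="date", how="left")
print(enriched)
```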

2. Feature Extraction or Dimensionality Reduction:

    Feature extraction is a process for reducing the number of features in a dataset by creating new features from the existing ones (and then discarding the original features). This new, reduced set of features should be able to summarize most of the information contained in the original set. There are many feature extraction techniques; we are going to look at the most commonly used ones:
  1. Principal Components Analysis (PCA)
  2. Independent Component Analysis (ICA)
  3. Linear Discriminant Analysis (LDA)
  4. Locally Linear Embedding (LLE)
  5. t-distributed Stochastic Neighbor Embedding (t-SNE)
  6. Autoencoders

2.1 Principal Components Analysis (PCA):

PCA is one of the most used linear dimensionality reduction techniques. When using PCA, we take our original data as input and try to find a combination of the input features that best summarizes the original data distribution, so as to reduce its original dimensionality. PCA does this by maximizing variance and minimizing the reconstruction error by looking at pairwise distances. In PCA, our original data is projected onto a set of orthogonal axes, and each of the axes is ranked in order of importance.
PCA is an unsupervised learning algorithm, therefore it doesn’t care about the data labels but only about variation. This can lead in some cases to misclassification of data. While using PCA, we can also explore how much of the original data variance was preserved using the explained_variance_ratio_ attribute in Scikit-learn. It is important to mention that the principal components are uncorrelated with each other.

PCA Approach Overview:

  • Take the whole dataset consisting of d-dimensional samples ignoring the class labels
  • Compute the d-dimensional mean vector (i.e., the means for every dimension of the whole dataset)
  • Compute the scatter matrix (alternatively, the covariance matrix) of the whole data set
  • Compute the eigenvectors (e1, e2, ..., ed) and corresponding eigenvalues (λ1, λ2, ..., λd)
  • Sort the eigenvectors by decreasing eigenvalue and choose the k eigenvectors with the largest eigenvalues to form a d×k-dimensional matrix W (where every column represents an eigenvector)
  • Use this d×k eigenvector matrix to transform the samples onto the new subspace. This can be summarized by the equation y = Wᵀ × x (where x is a d×1-dimensional vector representing one sample, and y is the transformed k×1-dimensional sample in the new subspace).
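
Scikit-learn’s PCA effectively performs these steps for you (internally via SVD on the centered data). A minimal sketch on the Iris dataset – the dataset choice and scaling step are my illustrative assumptions – including the explained_variance_ratio_ attribute mentioned earlier:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)           # class labels are ignored by PCA

# Standardize so every feature contributes on the same scale
X_scaled = StandardScaler().fit_transform(X)

# Project the 4-dimensional data onto the top 2 principal components
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_scaled)

print(X_pca.shape)                          # (150, 2)
print(pca.explained_variance_ratio_)        # fraction of variance kept by each component
```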

2.2 Independent Component Analysis (ICA):

ICA is a linear dimensionality reduction method which takes as input a mixture of independent components and aims to correctly identify each of them (removing all the unnecessary noise). Two input features can be considered independent if both their linear and non-linear dependence is equal to zero.
In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents. This is done by assuming that the subcomponents are non-Gaussian signals and that they are statistically independent from each other. ICA is a special case of blind source separation.
A common example application is the “cocktail party problem” of listening in on one person’s speech in a noisy room. ICA is also used in medical applications such as EEG and fMRI analysis to separate useful signals from noise.
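
A minimal scikit-learn sketch of the cocktail-party idea: two synthetic source signals are mixed together and then recovered with FastICA. The signals and the mixing matrix are made up for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two hypothetical independent source signals (e.g. two speakers at a party)
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                          # speaker 1
s2 = np.sign(np.sin(3 * t))                 # speaker 2
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

# Mix the sources as two microphones would record them
A = np.array([[1.0, 0.5], [0.5, 2.0]])      # mixing matrix
X = S @ A.T

# Recover the independent components from the mixed signals
ica = FastICA(n_components=2, random_state=0)
S_estimated = ica.fit_transform(X)
print(S_estimated.shape)                    # (2000, 2) recovered sources
```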

2.3 Linear Discriminant Analysis (LDA)

LDA is a supervised dimensionality reduction technique and a machine learning classifier. LDA aims to maximize the distance between the mean of each class and to minimize the spread within the class itself. LDA therefore uses within-class and between-class scatter as measures. This is a good choice because maximizing the distance between the class means when projecting the data into a lower-dimensional space can lead to better classification results. When using LDA, it is assumed that the input data follows a Gaussian distribution, so applying LDA to non-Gaussian data can lead to poor classification results.
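
A minimal scikit-learn sketch of LDA used as a supervised dimensionality reduction step. The Iris dataset is an assumed example; since it has 3 classes, at most 2 discriminant axes are available:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)           # LDA is supervised, so the labels are used

# Project onto at most (n_classes - 1) discriminant axes; Iris has 3 classes
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)

print(X_lda.shape)                          # (150, 2)
print(lda.explained_variance_ratio_)        # variance explained by each discriminant axis
```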

PCA vs LDA:

PCA has no concern with the class labels. In simple words, PCA summarizes the feature set without relying on the output. PCA tries to find the directions of maximum variance in the dataset. In a large feature set, there are many features that are merely duplicates of other features or that have a high correlation with other features. Such features are basically redundant and can be ignored. The role of PCA is to find such highly correlated or duplicate features and to come up with a new feature set where there is minimum correlation between the features, or in other words a feature set with maximum variance between the features. Since the variance between the features doesn’t depend on the output, PCA doesn’t take the output labels into account.
Unlike PCA, LDA tries to reduce dimensions of the feature set while retaining the information that discriminates output classes. LDA tries to find a decision boundary around each cluster of a class. It then projects the data points to new dimensions in a way that the clusters are as separate from each other as possible and the individual elements within a cluster are as close to the centroid of the cluster as possible. The new dimensions are ranked on the basis of their ability to maximize the distance between the clusters and minimize the distance between the data points within a cluster and their centroids. These new dimensions form the linear discriminants of the feature set.

2.4 Locally Linear Embedding (LLE)

So far we have considered methods such as PCA and LDA, which perform well when there are linear relationships between the different features; we will now move on to how to deal with non-linear cases. Locally Linear Embedding is a dimensionality reduction technique based on Manifold Learning. A manifold is an object of D dimensions which is embedded in a higher-dimensional space. Manifold Learning then aims to make this object representable in its original D dimensions instead of in an unnecessarily larger space.
Some examples of Manifold Learning algorithms are: Isomap, Locally Linear Embedding, Modified Locally Linear Embedding, Hessian Eigenmapping, and so on.
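
A minimal scikit-learn sketch of LLE on the classic swiss-roll manifold; the dataset and the neighbour count are my illustrative choices, not from the original post:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A classic non-linear manifold: a 2-D sheet rolled up in 3-D space
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Unroll it back into 2 dimensions using local linear neighbourhoods
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
X_unrolled = lle.fit_transform(X)

print(X_unrolled.shape)                     # (1000, 2)
```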

3. Feature Selection

Feature selection is one of the core concepts in machine learning, and it hugely impacts the performance of your model. Feature selection is the process where you automatically or manually select the features which contribute most to the prediction variable or output you are interested in. In other words, it is the process of reducing the number of input variables when developing a predictive model.

Statistics-based feature selection methods involve evaluating the relationship between each input variable and the target variable using statistics and selecting those input variables that have the strongest relationship with the target variable. These methods can be fast and effective, although the choice of statistical measure depends on the data type of both the input and output variables.
There are a lot of ways in which we can think of feature selection, but most feature selection methods can be divided into three major buckets:
  • Filter-based: We specify some metric and filter features based on it. Examples of such metrics are correlation and chi-square.
  • Wrapper-based: Wrapper methods consider the selection of a set of features as a search problem. Example: Recursive Feature Elimination.
  • Embedded: Embedded methods use algorithms that have built-in feature selection. For instance, Lasso and random forests have their own feature selection methods.

3.1 Filter Based Method 

Filter feature selection methods use statistical techniques to evaluate the relationship between each input variable and the target variable, and these scores are used as the basis to choose (filter) those input variables that will be used in the model.
Basic idea: assign heuristic score to each feature to filter out the “obviously” useless ones.
  • Does the individual feature seem to help prediction?
  • Do we have enough data to use it reliably?
  • Many popular scores exist [see Yang and Pedersen ’97]
Classification with categorical data
  • Chi-squared
  • information gain
  • document frequency
Regression
  • correlation
  • mutual information
These scores all depend on one feature at a time (and the data); you then pick how many of the highest-scoring features to keep. Some common methods are:
  1. information gain
  2. chi-square test
  3. fisher score
  4. Correlation Coefficient (Pearson Correlation)
  5. Variance Threshold
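
A minimal filter-method sketch using scikit-learn’s SelectKBest with the chi-squared score; the dataset and the value of k are illustrative assumptions, not from the original post:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)

# Score every feature against the target with the chi-squared statistic
# and keep only the 2 highest-scoring features
selector = SelectKBest(score_func=chi2, k=2)
X_selected = selector.fit_transform(X, y)

print(selector.scores_)                     # per-feature chi-squared scores
print(X_selected.shape)                     # (150, 2)
```
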
3.2 Wrapper Based Method

Wrapper methods evaluate multiple models using procedures that add and/or remove predictors to find the optimal combination that maximizes model performance. Sequential Forward Selection (SFS), a special case of sequential feature selection, is a greedy search algorithm that attempts to find the “optimal” feature subset by iteratively selecting features based on the classifier performance. We start with an empty feature subset and add one feature at a time in each round; this feature is selected from the pool of all features that are not in our feature subset, and it is the feature that – when added – results in the best classifier performance.
Some common wrapper methods are:

  1. Recursive feature elimination
  2. Sequential feature selection algorithms
  3. Genetic algorithms
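
A minimal wrapper-method sketch using Recursive Feature Elimination around a logistic regression; the dataset, the estimator, and the number of features to keep are illustrative assumptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)       # scale so the coefficients are comparable

# Recursively fit the model, drop the weakest feature, and repeat
# until only the 10 most useful features remain
rfe = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=10)
rfe.fit(X, y)

print(rfe.support_)                         # boolean mask of the selected features
print(rfe.ranking_)                         # rank 1 = selected
```
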
3.3 Embedded method:
   
Embedded methods have been proposed more recently and try to combine the advantages of both previous methods. A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously, as in the FRMT algorithm. Some common algorithms are:
  1. L1 (LASSO) regularization
  2. Decision Tree or Random Forest  
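
A minimal embedded-method sketch showing both LASSO’s zeroed-out coefficients and a random forest’s built-in feature importances; the dataset and the alpha value are illustrative assumptions:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
X = StandardScaler().fit_transform(X)

# L1 (LASSO) regularization: features whose coefficient is driven to zero are dropped
lasso = Lasso(alpha=1.0).fit(X, y)
print("Lasso kept:", (lasso.coef_ != 0).sum(), "of", X.shape[1], "features")

# Random forest: built-in impurity-based feature importances
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("Forest importances:", forest.feature_importances_.round(3))
```
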
Iterative Process of Feature Engineering

Knowing where feature engineering fits into the context of the process of applied machine learning highlights that it does not stand alone. It is an iterative process that interplays with data selection and model evaluation, again and again, until we run out of time on our problem. The process might look as follows:
Brainstorm features: Really get into the problem, look at a lot of data, study feature engineering on other problems and see what you can steal. 
Devise features: Depends on your problem, but you may use automatic feature extraction, manual feature construction and mixtures of the two.
Select features: Use different feature importance scorings and feature selection methods to prepare one or more “views” for your models to operate upon.
Evaluate models: Estimate model accuracy on unseen data using the chosen features.
You need a well defined problem so that you know when to stop this process and move on to trying other models, other model configurations, ensembles of models, and so on. There are gains to be had later in the pipeline once you plateau on ideas or the accuracy gains become marginal.
You need a well considered and designed test harness for objectively estimating model skill on unseen data. It will be the only measure you have of your feature engineering process, and you must trust it not to waste your time.
I will explain each of these methods with examples in upcoming posts. This post gives you an overview of the methods and their usage.

6 Tools Which Will Help Improve Productivity While Working From Home

 

In light of the Coronavirus outbreak, many businesses had to shut down their offices and resort to work-from-home measures. Although this is a necessary safety measure, businesses cannot expect the same level of efficiency while employees are working from their homes. This may be due to several communication problems or certain network issues which the employees might face. In such situations, employees can make use of certain tools available online to ensure that they can perform their jobs efficiently from home. Some of these are:

1) Microsoft Teams – It is a useful tool which enables its users to set up Teams and chat and communicate within these teams. If users need face to face conversation, then they can jump straight into voice or video chats with other users with a single click. Microsoft Teams is integrated with Microsoft Office Services such as MS Word and MS Excel.


2) Drawpile – This is a multimedia app which allows multiple users to draw on a single canvas simultaneously. Drawpile is a simple little graphics design tool with a built-in server through which different users can access the canvas at the same time.


3) Firewalla – It is a powerful firewall that connects to the router and protects our devices from cyber attacks. It safeguards our personal and business data. It is a useful device in public and home networks to ensure maximum protection from viruses and network attacks.


4) Snapchat – Bored at home? Well, Snapchat has got you covered. Snapchat is a multimedia app which allows you to easily talk with friends, view live stories and discover the latest news. You can send snaps to your friends which will only last for about 10 seconds and then disappear or send Stories which lasts for 24 hours.


5) Range Extender – Home Wi-Fi usually has a limited range. You may not get a network connection when you are in a room far away from the router. When you are doing your office work from home, you need a really good network connection. To get good connectivity, you can use a range extender. A range extender helps keep your devices connected in every room. It provides you with a fast, reliable connection and expanded coverage in every corner of your home.


6) Work From Home Excuse Generator – Suppose you woke up a little late for work. You look at the clock and realise that your office hours have already started. You know that you can’t reach there on time. What do you do? Well, go to the website wth.ninja. This site provides you with a variety of excuses which you can present to your boss, thus enabling you to work from home.


Thank you for reading – hope you have a fun time working from home.

Looking for more? Get the complete remote work playbook here

Create an invoice storage and filing system for your Amazon ASINs

 

Keeping your Amazon business organized will make it easy to manage all projects effectively. It will also help you understand your numbers as you scale your business. But there is another reason you need a proper storage and filing system to keep track of all your invoices. Now more than ever, Amazon is asking sellers for invoices from suppliers.

These invoice requests can be triggered by customer issues like complaints about product condition, quality, and authenticity. Sometimes, Amazon may ask you for invoices even before your product has made its first sale.

This happens if Amazon’s algorithms detect that the item you’re selling will likely attract a complaint in the future.

Amazon has already established itself as a dominant force in the e-commerce space. Because of that, protecting the integrity of the marketplace is their driving force. Amazon doesn’t want to be known as the marketplace where counterfeit products are sold. To nip that growing fear, they need to establish that what you sell is new, safe and sourced from a legitimate supplier.

Keeping track of your invoices will help you easily track customer complaints and act as evidence should you face account issues like suspension.

Why do you need an invoice storage system for filing your invoices?


Amazon is constantly enforcing strict policies to ensure the marketplace remains legitimate. One of those policies is asking sellers to submit invoices from suppliers. Because of that, it’s vital that you have a storage system for filing your invoices. Here is a breakdown of why an invoice storage and filing system is essential to scale your business:

Complaints of inauthentic/counterfeit items: Buyers can complain to Amazon that a product purchased was counterfeit or inauthentic. One of the cardinal sins you can commit as an Amazon FBA seller is selling counterfeit goods. Doing so can see your account suspended without the possibility of appealing the suspension. But Amazon doesn’t just suspend your account when such a complaint comes through. They will first ask to see your invoices from the supplier. You have nothing to worry about if they determine that the product is authentic.

Complaints of poor product condition: Likewise, a buyer may complain to Amazon that the product was not in the condition stated. For instance, a used product sold as new will attract a complaint. When that happens, you will have to prove through supplier invoices that the products were new.

When Amazon suspends your ASIN: Amazon can suspend your ASIN for several reasons, including selling counterfeit and inauthentic goods, condition, or expiry issues to name a few. While you can appeal such an action, you will need to provide the necessary documents for Amazon to consider reinstating your listing. Such documents can include supplier invoices in case of inauthentic or counterfeit goods.

When going through the appeal process: The appeal process for suspended ASINs is long and tiresome. However, it’s nothing you can’t beat with a little patience and preparation.

What other areas of your Amazon business will you need invoices on the ready?

Running an Amazon business requires being on top of your game. Otherwise, you might find yourself on the wrong side of Amazon policies without even realizing it. So, what other areas of your business do you need to have invoices ready?

You will need to have your supplier invoices when filing for Amazon FBA reimbursements. Surprisingly, most Amazon sellers aren’t aware they are eligible for Amazon FBA reimbursements.

Billions of items are shipped every year through Fulfillment by Amazon. With such high volumes, mistakes are bound to happen. Unfortunately, such errors in the form of discrepancies will end up costing you money unless you know what measures to take.

Discrepancies are incorrect transactions against your Amazon seller account of items lost, destroyed, damaged, or disposed of. Sometimes, Amazon may fail to receive your inbound FBA shipments translating to a discrepancy. Discrepancies might also come in the form of overcharges in Amazon fees.

You are essentially losing money if you don’t file a claim for your FBA reimbursement. An Amazon reimbursement claim is simply a case you file with Amazon, so they can pay you the amount they owe you.

Essentially, when you do business with Amazon, you pay them to take care of all inventory management. If errors arise and your items get lost or damaged during transit or in their facility, you have a right to submit a claim. When you submit a reimbursement claim, sometimes, you might have to provide invoices to get your money. This is an integral reason having an invoicing system is vital to your business. Keeping track of your reimbursements requires diligence and process but having a process in place will pay off. An invoice storage and filing system can help immensely.

How to store, file and organize your invoices

Invoices can spare you a lot of headaches when you have issues with Amazon. On top of that, invoices come in handy when the time comes to close your books or file taxes. Here are 8 steps to storing, filing and organizing your invoices for easy retrieval when required:

  1. Scan all invoices that are not already in electronic form because Amazon only accepts invoices that are in electronic form.
  2. As an Amazon seller, the chances are that you have a lot of receipts. In such a case, consider outsourcing services like Shoeboxed. Shoeboxed helps businesses track their expenses by scanning, entering, and categorizing all data in receipts in a secure and searchable account that is acceptable by even Amazon.
  3. If you have bookkeeping software, ensure its functionality allows you to store invoices.
  4. If you don’t fancy software, a nested folder on your PC or an online storage system like Dropbox will do.
  5. When you purchase items, add the invoices to your storage system right away rather than waiting until you have a pile of receipts or until Amazon asks for invoices.
  6. Consider using creative descriptions to help find invoices quickly when required.
  7. When providing invoices to Amazon, it’s okay to add the ASIN or highlight the individual product if it isn’t clear from the context.
  8. Finally, ensure that someone else other than you has access to your invoice storage system in case you are not there.

What are Amazon’s conditions for invoice acceptance?

If Amazon requests invoices, don’t give their investigators the opportunity to doubt the legitimacy of your suppliers or reject your invoices. Before sending the invoices, look for gaps and unclear information or have someone else do it for you.

Wrong supply chain documentation could lead to more than a listing block. If Amazon investigators detect similar problems in the past, your selling account could be at risk. Amazon has very high standards for accepting supplier invoices. So, make sure your invoices meet the following criteria:

  • The document is clear and easy to read. Amazon investigators don’t want to go through poorly scanned PDFs or blurred invoice photos you took with your phone. All invoices must look professional, so the investigator can easily find all the required information.
  • The buyer’s name on the invoice should match the name on your seller account.
  • The address on the invoice should match what you have on your Amazon seller account. If you’ve changed your billing address, then update that on your account.
  • Submit genuine invoices because Amazon doesn’t just receive your invoices, take a quick look and file them. Far from that! Amazon investigators will verify them by researching the supplier. They will send emails, make phone calls, and check for a website or valid address to determine that the supplier is legitimate.

Summary

Invoice verifications by Amazon may sound like a scary affair you don’t want to engage in. But fear not. It’s easy to avoid mandatory verifications, and the best way to do that is to avoid situations that could require you to send invoices for verification in the first place.

But here is the truth; customer complaints are unavoidable, so at some point, you may be required to send invoices to Amazon. Just ensure that you have all invoices at hand and that they are accurate if such a time comes. You can also look at it as a way of helping you keep track of the numbers to grow your business. Finding efficiencies and processes in your business is helpful no matter how you slice it.

5 Things To Ask When Looking For an Amazon 3PL

 

What Is An Amazon 3PL?

Picture this. You have a growing Amazon business and a ton of products, but you don’t have the warehouse, infrastructure or time to manage it all.

Every day, you jump from sourcing inventory to running ads, to navigating the complicated world of eCommerce fulfillment. Time is scarce, and you find yourself directly managing and addressing minor operational concerns.

That is where an Amazon 3PL comes in. They will manage your products on Amazon from start to finish using a simple and transparent process.

 

A good Amazon 3PL can receive your products from anywhere in the world. From there, they can be sorted, inspected, and picked by specialized reps with knowledge of Amazon’s strict label and packing process.

Why Do You Want to Partner with An Amazon 3PL?

Now that you know a bit about what an Amazon 3PL is, let’s talk about how to find an Amazon 3PL partner for you and your business, and why you may want to partner with one.

There is no hiding it, logistics is complicated, especially when it comes to Amazon order fulfillment.

 

If you feel like you are spending too much time working in your business and not on your business, you should consider Amazon 3PL services. Switch to long-term thinking. Will poly bagging products yourself improve your supply chain and help your business grow?

The biggest advantage of partnering with an Amazon 3PL is the time that you get back to focus on the more critical elements of running your Amazon eCommerce business. A good 3PL will also have competitive storage fees.

Amazon sellers who choose to use fulfillment services like the ones on offer from an Amazon 3PL often enjoy higher levels of customer satisfaction and a simpler eCommerce fulfillment process. This gives them the time that they need to succeed in the industry.

5 Things to Ask

Are There Any Value-Add Services You Offer?

 

In Canada alone, there are dozens of different 3PLs and Amazon-focused prep centers to choose from. Finding the right one can be difficult.

You want to ask if they offer additional services beyond their accuracy rate and their average time to receive inventory. Look for things like:

  • Carton Forwarding
  • Kitting and Assembly
  • Cross-Docking & Amazon 3PL Services
  • Amazon Product Bundling
  • Amazon Returns
  • Custom Fulfillment Solutions
  • Handling Branded Packaging

Do You Offer Omnichannel Fulfillment?

Amazon is the world’s largest retailer, and largest ecommerce fulfillment company, but they aren’t the only company you should rely on for revenue.

The most important value-add service that an Amazon 3PL can offer you is the ability to sell omni-channel to other major marketplaces like Shopify, Walmart and eBay.

Ask yourself, “does this potential partner offer services that will let me expand outside of Amazon?”

What is your Fulfillment accuracy rate?

A trustworthy Amazon 3PL will advertise their fulfillment accuracy rate on their website. However, if you can’t easily find the number, it is worth asking their sales or operations team.


You want to ensure that your 3PL partner is able to accurately label, pick, pack, and fulfill your products to Amazon. Seller Central outlines stringent guidelines on how to pack and ship your goods properly to their fulfillment centers.

If your chosen 3PL cannot accommodate the rules, it can have negative impacts on your Amazon business account. On the extreme end, mistakes can result in fines, suspensions and account termination, so you want to ensure you select a reputable, transparent partner.

What Integrations Do You Have?

 

Another important question to ask is how well their software integrates with different marketplaces and inventory management systems. You want to look for an Amazon 3PL partner who can offer a unified solution that gives you the ability to sell omnichannel.

The reality is that there are thousands of different systems and software packages that a business can choose from, but the right long-term partner will have a system that is efficient and flexible.

 

How Well Does Your Team Understand the FBA Preparation Process?

While many 3PLs may offer Amazon-specific services, not all of them are created equal. Experience, knowledge, and a specialized team are all things that come if you can vet and select the right Amazon 3PL.

The act of getting your products to an Amazon fulfillment center is itself quite complicated. If your partner doesn’t understand this, you can run into unnecessary problems with compliance.

When selecting a long-term partner to help run your Amazon business account, make sure that their team has the experience that is required to help your ecommerce business grow.

Conclusion

Effectively selling products on Amazon is complex. A great seller will have to juggle marketing efforts, logistics, case management, and more – meaning that a great seller will have their hands full.

This, combined with the constant pressure from maintaining great listings can make selling on Amazon overwhelming.

If you don’t feel like you have the time to focus on the most important parts of your Amazon business, consider looking at a trusted Amazon 3PL.

Hiring the right company can offer you effective account management that will help you, freeing up time for you to focus on the more important elements of your Amazon business.

Brand Registry is not what you think it is

 

As a brand owner, selling on Amazon means protecting your brand by whatever means necessary. But this is not usually possible when there are few tools to help you take control of your brand and listings. When Amazon Brand Registry began, Amazon brand owners were excited. For them, Brand Registry would solve many of the problems they were experiencing. Sellers often thought Brand Registry would:

  • Help control listings
  • Make listing more attractive with headlines, bullet points, images, and A+ content
  • Help keep competitors from selling counterfeit products
  • Open doors to great-quality seller support

But as most sellers find out, Brand Registry has its limitations, and there is only so much it can do. While Brand Registry does help brand owners take control of their brands, it benefits Amazon more than the brand owners. This blog outlines why you should not be as excited about Brand Registry when selling on Amazon.

Traditional Benefits of Brand Registry Support

When Amazon rolled out the program, support was excellent. The program helped identify brand owners to Amazon and protected their intellectual property and content. Brand Registry also gave brand owners access to other marketing programs, including Amazon A+ content and Amazon Storefronts.

Because of that, Amazon’s Brand Registry had a dedicated team that brand owners could contact to report issues like property infringement, listing issues, policy violations, technical issues, and case escalation. The support team would go to great lengths to resolve problems promptly and effectively. However, that has changed over the years.

Amazon employees who launched the program either quit or rotated to other areas, and team members from Amazon Seller Support replaced them. If you have ever dealt with Seller Support, you know it is a “hit and miss” and one of the most unhelpful areas of Amazon.

While Seller Support has improved over the years as Amazon strives to create a better experience for sellers on the site, there is still a long way to go. The same issues sellers complained about years ago are still repeated daily.

Why did Amazon create Brand Registry 2.0?

Contrary to popular belief, Amazon did not create Brand Registry 2.0 to help sellers. Instead, the program helped fulfill Amazon’s legal obligations. The e-commerce giant faced lawsuits from brand owners claiming that Amazon was not doing enough to minimize intellectual property violations and counterfeit sales.

Thus, Amazon introduced Brand Registry to help deal with the many counterfeit and IP infringement issues raised by brand owners. The program also moves the burdens of regulating violations from Amazon to the brand owners. With Brand Registry in place, a brand can file an IP violation complaint against a seller. Amazon is legally obliged to act in favor of the brand by removing the listing in question due to copyright, patent, or trademark infringement.

Since Brand Registry identifies the brand owners plus their authorized sellers, Amazon can easily enforce its Standards for Brands Selling in the Amazon Store. Under the policy, brands cannot sell directly to customers through Amazon. Instead, a brand can only sell to Amazon Retailer as a vendor.

In what way does Brand Registry help Amazon more than sellers?

We have established that Amazon did not create a Brand Registry to help sellers. Instead, the platform allows Amazon to manage the work associated with receiving IP complaints, acting on them, and overseeing related appeals. They do this to avoid legal implications and not because they want to give the seller the best experience. In reality, Brand Registry does not provide actual power to brand owners over their listings; it gives “ownership” to them.

When manufacturers, resellers, and distributors sell identical products, listing price, description, and bullet points are often different. As a brand owner, you can apply for Amazon Brand Registry to have control and consistency such that customers view the products the same way across all listings.

When applying for Brand Registry, you officially own that brand if you provide proof of ownership. However, this does not mean that resellers and distributors will stop selling your product on Amazon. Brand Registry means product information across all listings will be consistent. So, while the seller owns the storefront, you control how the brand appears as a brand owner. It helps prevent competition because customers are more comfortable buying from the manufacturer or brand than third-party sellers. But here is the thing; it’s never that simple, and most of the time, it doesn’t work like that. Brand Registry does not help brand owners control who distributes their genuine products. Brand Registry also does not restrict who can and cannot sell your brand, nor does it help you manage product resellers.

Here is how:

  • Being a brand owner does not give you complete control of content since Amazon uses content distribution systems. The system makes it easy for any seller to contribute to a listing. You have product contribution rights as a brand owner, but Amazon cannot lock pages.
  • Amazon could consider a distributor as the main content contributor if they created the listing, even if you are the brand owner.
  • With hybrid accounts (Vendor and Seller Central), content ownership sometimes remains with the vendor account, which gets priority.
  • Brand Registry does not stop reselling since any seller who acquires inventory legitimately can still sell products from your brand on Amazon.

Summary

As a brand owner, you qualify for Amazon Brand Registry. Brand Registry allows you to enforce your IP rights, control your product listings, and access other marketing tools. Brand Registry can help set you apart from other sellers and distributors selling your products on Amazon. But as stated above, Brand Registry does not cater to you, the seller. To be eligible for Amazon Brand Registry, you must have a registered trademark. Suppose you are using the IP Accelerator program; in that case, a pending trademark will do, since you get faster access to Brand Registry rather than enduring the 6-month waiting period for trademark approval. But while using Brand Registry, consider its shortcomings and take advantage of the elements that help your brand stand out against your competition.

Do you need help navigating Brand Registry? Riverbend Consulting can help! Contact us to get proper Amazon support.

Monday, March 9, 2020

What is Amazon Fraud Seller account?

 

Fraud isn’t a term we want to hear in any business, let alone an online e-commerce business. As an Amazon seller, you should be aware of some of the issues that can arise from such a competitive online marketplace, and unfortunately, fraud is one of those instances.

Online marketplaces like Amazon offer brands the platform to sell their products globally. An Amazon seller account is a must-have in today’s cut-throat business environment, but these accounts aren’t immune from malicious practices.

Structuring your online marketing strategy around your Amazon seller account could grow your profits exponentially. By using affordable digital marketing tools, you can reach untapped consumer markets.

Ultimately, your business needs an Amazon seller account. Otherwise, your brand will struggle to remain relevant in a shrinking local market as competitors leverage online retail markets to grow. But there is also another thing you need to worry about – Amazon frauding your seller account.

This article will examine what a frauded account is and how frauding happens.

What is a frauded Amazon seller account?

While applying for an Amazon seller account is straightforward, getting approved is not a cakewalk. Amazon is beyond thorough in its vetting process. The main reason for the high level of caution is that Amazon can’t risk approving fraudulent brands and entities on its platform. In fact, getting approved for an Amazon seller account is only half the battle.

As mentioned, security is a major priority for Amazon. As a result, the platform has strict policies to prevent fraud. The hard pill to swallow is that you could end up with a “frauded” Amazon seller account if Amazon believes you violated its policies.

But what is a frauded Amazon seller account? Amazon reserves the right to remove owner and sub-owner access to any account it considers a security risk. For instance, Amazon will freeze your account if it detects unauthorized access.

On the other hand, the platform will also suspend your Amazon seller account if it considers it a fraud risk. In the end, your account will be locked. As you can imagine, a fraud strike on your Amazon seller account will devastate your business.

Apart from losing access to the lucrative marketplace, you won’t be able to remove your Amazon inventory stored in Amazon FCs. What’s worse is that sellers with frauded accounts can’t access funds in their accounts. A fraud strike is the most extreme action that Amazon can take against a seller.

How does an Amazon fraud seller account occur?

There’s no way around it; you need a deep understanding of Amazon security policies to lower the chances of Amazon frauding your seller account. Knowing why a frauded Amazon seller account occurs is a good start.

#1 Security

It comes as no surprise that security tops the list. While this happens for several reasons, it boils down to account access. In short, Amazon will suspend your account if it suspects unauthorized access.

While there are cases where there are valid security concerns, sometimes the strict policy affects innocent sellers as well. For example, Amazon might lock your seller account if it detects several access attempts in a short span. In addition, Amazon might also block your seller account for sudden changes to core account holder information such as credit card details.

These policies might seem strict at a glance, but they protect the interests of Amazon and sellers on the platform. First, an unauthorized party could do irreversible damage if they access your seller account.

Second, Amazon can’t risk releasing funds to a bad actor if your seller’s bank account has been tampered with. Consequently, while a locked account might be frustrating in the short term, the action can save your business.

#2 Risk to Amazon

Amazon values nothing more than its hard-earned reputation as a trustworthy shopping platform. Because of that, the company will go to any length to maintain its reputation by keeping consumers satisfied.

In short, Amazon doesn’t joke about enforcing its customer trust policies. The platform takes severe actions against seller accounts it considers a risk to customer trust.

For instance, Amazon will fraud strike an account for gift card fraud. This occurs when a seller buys inventory on a corresponding selling account and flips it on Amazon.

Furthermore, failing to fulfill your MFN orders on a large scale will lead to your seller account being frauded. On the same note, uploading fake tracking numbers and manipulating your MFN shipping list is another reason for a fraud strike.

Amazon will also fraud strike your account if it detects illegal activity such as money laundering, or brushing scams in which a seller uses dubious techniques to boost their ratings by creating fake orders.

What should a seller do if they are “frauded”?

No doubt, getting frauded will have devastating consequences for your business. The good news is a fraud strike isn’t the end of the world. There are viable options for appealing to Amazon and reversing your fraud strike.

Option 1: Get professional help

Contacting an Amazon expert is the first thing you should do when your account gets frauded. Appealing a fraud strike on your own is a horrible idea. Not only will it take ages to make any progress, but you risk a permanent account closure if you make mistakes in your appeal.

Hiring a professional who has decades of Amazon strategic account management experience is your best bet for reinstatement. A professional knows how to maneuver the appeal system and strategically escalate whenever the process stalls.

Remember, time is of the essence. You can’t afford to make mistakes with your appeal. A professional like a full-service Amazon agency will fast-track the appeal process, reducing a fraud strike’s negative, long-term impact.

Option 2: Find out the reason for the fraud strike

Here’s the thing: you lose access to Seller Central if your account gets frauded. As a result, you have no way of accessing your account data. There’s no doubt that this is a frustrating experience, especially since the data could help you find out the reason behind the fraud strike.

However, you can get around this by interviewing anyone with access to your seller account. Your findings could be the difference between a permanent ban and reinstating your seller account. Moreover, this process allows you to narrow down possible reasons for being frauded. As a result, your professional account manager can use the information you have gathered to fast-track the appeal process.

Summary

There’s no doubt that your brand can’t maximize its profits without an Amazon seller account; it is simply crucial to the success of your business.

Not only does Amazon give you access to untapped markets, but small businesses can also leverage its services to compete with the big brands. With Amazon, your business can overcome the constraints of local markets and open the door to much faster growth.

In short, investing in a professional Amazon strategic account manager is in your best interest. Riverbend Consulting can help keep your account safe from a fraud strike, and our expertise can be leveraged to get your account reinstated if it’s ever frauded.

Thursday, March 5, 2020

How To Run The Right Kind Of Research Study With The Double-Diamond Model

Product and design teams make a lot of decisions. Early on in the development of a product, they will be thinking about features — such as what the product should do, and how each feature should work. Later on, those decisions become more nuanced — such as ‘What should this button say?’ Each decision introduces an element of risk — if a bad decision is made, it will reduce the chance of the product being successful.
The people making these decisions rely on a variety of information sources to improve the quality of their decisions. These include intuition, an understanding of the market, and an understanding of user behavior. Of these, the most valuable source of evidence to put behind decisions is an understanding of our users.
Being armed with an understanding of the appropriate user research methods can be very valuable when developing new products. This article will cover some appropriate methods and advice on when to deploy them.

A Process For Developing Successful Products

The double diamond is a model created by the UK’s Design Council which describes a process for making successful products. It describes taking time to understand a domain, picking the right problem to solve, and then exploring potential ideas in that space. This should prove that the product is solving real problems for users and that the implementation of the product works for users.

Succeeding at each stage of the process requires understanding certain information about your users, and the information you need changes as you move through the process.

Each stage has some user research methods that are best suited to uncovering that information. In this article, we’ll refer to the double diamond to highlight the appropriate research method throughout product development.

 

Wednesday, March 4, 2020

Building A Facial Recognition Web Application With React

If you are going to build a facial recognition web app, this article will introduce you to an easy way of integrating one. We will take a look at the Face Detection model and the Predict API for our face recognition web app with React.

What Is Facial Recognition And Why Is It Important?

Facial recognition is a technology that involves classifying and recognizing human faces, mostly by mapping individual facial features, recording the unique ratios mathematically, and storing the data as a face print. The face detection in your mobile phone’s camera makes use of this technology.

How Facial Recognition Technology Works

Facial recognition is an application of biometric software that uses a deep learning algorithm to compare a live capture or digital image with a stored face print in order to verify an individual’s identity. Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify concepts relevant to a human, such as digits, letters, or faces.
Facial detection is the process of identifying a human face within a scanned image; the extraction step involves obtaining facial features such as eye spacing, variation, angle, and ratio to determine whether the object is human.

A Brief Introduction To Clarifai

In this tutorial, we will be using Clarifai, a platform for visual recognition that offers a free tier for developers. It offers a comprehensive set of tools that enable you to manage your input data, annotate inputs for training, create new models, and predict and search over your data. However, there are other face recognition APIs that you can use; check here to see a list of them. Their documentation will help you integrate them into your app, as they almost all use a similar model and process for detecting a face.

Getting Started With Clarifai API

In this article, we are focusing on just one of the Clarifai models, called Face Detection. This particular model returns probability scores on the likelihood that the image contains human faces, along with coordinate locations of where those faces appear, as a bounding box. This model is great for anyone building an app that monitors or detects human activity. The Predict API analyzes your images or videos and tells you what’s inside of them. The API will return a list of concepts with corresponding probabilities of how likely it is that these concepts are contained within the image.
You will get to integrate all of this with React as we continue with the tutorial, but now that you have briefly learned about the Clarifai API, you can dive deeper into it here.
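To make that concrete, here is a rough sketch of the slice of a Predict API response this tutorial relies on. Only the path we use later (outputs[0].data.regions[0].region_info.bounding_box) is shown; the numeric values are made-up placeholders, and a real response carries many more fields.
// Illustrative response shape only; the values below are placeholders.
const exampleResponse = {
  outputs: [
    {
      data: {
        regions: [
          {
            region_info: {
              // each value is a fraction (0–1) of the image width or height
              bounding_box: {
                top_row: 0.1,
                left_col: 0.29,
                bottom_row: 0.53,
                right_col: 0.61,
              },
            },
          },
        ],
      },
    },
  ],
};

console.log(exampleResponse.outputs[0].data.regions[0].region_info.bounding_box);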
What we are building in this article is similar to the face detection box on a pop-up camera in a mobile phone.

You can see a rectangular box detecting a human face. This is the kind of simple app we will be building with React.

Setting Up The Development Environment

The first step is to create a new directory for your project and start a new React project; you can give it any name of your choice. I will be using the npm package manager for this project, but you can use yarn if you prefer.
Note: Node.js is required for this tutorial. If you don’t have it, go to the Node.js official website to download and install before continuing.
Open your terminal and create a new React project.
We are using create-react-app, which is a comfortable environment for learning React and the best way to start building a new single-page application in React. It is a global package that we install from npm. It creates a starter project that contains webpack, Babel, and a lot of nice features.
/* install react app globally */
npm install -g create-react-app

/* create the app in your new directory */
create-react-app face-detect

/* move into your new react directory */
cd face-detect

/* start development server */
npm start
Let me first explain the code above. We are using npm install -g create-react-app to install the create-react-app package globally so you can use it in any of your projects. create-react-app face-detect will create the project environment for you since it’s available globally. After that, cd face-detect will move you into our project directory. npm start will start our development server. Now we are ready to start building our app.
You can open the project folder with any editor of your choice. I use Visual Studio Code. It’s a free IDE with tons of plugins to make your life easier, and it is available for all major platforms. You can download it from the official website.
At this point, you should have the following folder structure.
FACE-DETECT TEMPLATE
├── node_modules
├── public 
├── src
├── .gitignore
├── package-lock.json
├── package.json
├── README.md
Note: React provides us with a single-page app template; let us get rid of what we won’t be needing. First, delete the logo.svg file in the src folder and replace the code you have in src/App.js so that it looks like this.
import React, { Component } from "react";
import "./App.css";
class App extends Component {
  render() {
    return (
      <div className="App">
      </div>
    );
  }
}
export default App;
What we did was to clear the component by removing the logo and other unnecessary code that we will not be making use of. Now replace your src/App.css with the minimal CSS below:
.App {
  text-align: center;
}
.center {
  display: flex;
  justify-content: center;
}
We’ll be using Tachyons for this project. It is a tool that allows you to create fast-loading, highly readable, and 100% responsive interfaces with as little CSS as possible.
You can install tachyons to this project through npm:
# install tachyons into your project
npm install tachyons
After the installation has completed, let us add Tachyons to our project in the src/index.js file.
import React from "react";
import ReactDOM from "react-dom";
import "./index.css";
import App from "./App";
import * as serviceWorker from "./serviceWorker";
// add tachyons into your project; note that this import is the only line of code added here
import "tachyons";

ReactDOM.render(<App />, document.getElementById("root"));
// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers: https://bit.ly/CRA-PWA
serviceWorker.register();
The code above isn’t different from what you had before; all we did was add the import statement for tachyons.
So let us give our interface some styling in the src/index.css file.

body {
  margin: 0;
  font-family: "Courier New", Courier, monospace;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  background: #485563; /* fallback for old browsers */
  background: linear-gradient(
    to right,
    #29323c,
    #485563
  ); /* W3C, IE 10+/ Edge, Firefox 16+, Chrome 26+, Opera 12+, Safari 7+ */
}
button {
  cursor: pointer;
}
code {
  font-family: source-code-pro, Menlo, Monaco, Consolas, "Courier New",
    monospace;
}
In the code block above, I added a background color and a cursor pointer to our page. At this point we have our interface set up; let’s start creating our components in the next section.

Building Our React Components

In this project, we’ll have two components: a URL input box that fetches images for us from the internet — ImageSearchForm — and an image component that displays our image with a face detection box — FaceDetect. Let us start building our components below:
Create a new folder called components inside the src directory. Inside src/components, create two more folders called ImageSearchForm and FaceDetect. Then open the ImageSearchForm folder and create two files: ImageSearchForm.js and ImageSearchForm.css.
Next, open the FaceDetect directory and create two files: FaceDetect.js and FaceDetect.css.
When you are done with all these steps, your folder structure should look like this in the src/components directory:
src/components TEMPLATE

├── src
  ├── components 
    ├── FaceDetect
      ├── FaceDetect.css 
      ├── FaceDetect.js 
    ├── ImageSearchForm
      ├── ImageSearchForm.css 
      ├── ImageSearchForm.js
At this point, we have our components folder structure; now let us import them into our App component. Open your src/App.js file and make it look like what I have below.
import React, { Component } from "react";
import "./App.css";
import ImageSearchForm from "./components/ImageSearchForm/ImageSearchForm";
// import FaceDetect from "./components/FaceDetect/FaceDetect";

class App extends Component {
  render() {
    return (
      <div className="App">
        <ImageSearchForm />
        {/* <FaceDetect /> */}
      </div>
    );
  }
}
export default App;
In the code above, we mounted both components, but notice that FaceDetect is commented out: we are not working on it until the next section, and keeping it commented out avoids an error in the code. We have also imported our components into our app.
To start working on our ImageSearchForm, open the ImageSearchForm.js file and let us create the component. The example below is our ImageSearchForm component, which will contain an input form and a button.
import React from "react";
import "./ImageSearchForm.css";

// imagesearch form component

const ImageSearchForm = () => {
  return (
    <div className="ma5 to">
      <div className="center">
        <div className="form center pa4 br3 shadow-5">
          <input className="f4 pa2 w-70 center" type="text" />
          <button className="w-30 grow f4 link ph3 pv2 dib white bg-blue">
            Detect
          </button>
        </div>
      </div>
    </div>
  );
};
export default ImageSearchForm;
In the component above, we have our input form to fetch the image from the web and a Detect button to perform the face detection action. I’m using Tachyons CSS here, which works like Bootstrap; all you have to do is add the right className. You can find more details on their website.
To style our component, open the ImageSearchForm.css file. Now let’s style the components below:
.form {
  width: 700px;
  background: radial-gradient(
      circle,
      transparent 20%,
      slategray 20%,
      slategray 80%,
      transparent 80%,
      transparent
    ),
    radial-gradient(
        circle,
        transparent 20%,
        slategray 20%,
        slategray 80%,
        transparent 80%,
        transparent
      )
      50px 50px,
    linear-gradient(#a8b1bb 8px, transparent 8px) 0 -4px,
    linear-gradient(90deg, #a8b1bb 8px, transparent 8px) -4px 0;
  background-color: slategray;
  background-size: 100px 100px, 100px 100px, 50px 50px, 50px 50px;
}
The style above is a CSS background pattern for our form, just to give it a nicer design. You can generate a CSS pattern of your choice here and use it as a replacement.
Open your terminal again to run your application.
/* To start development server again */
npm start
We now have our ImageSearchForm component displayed in the browser.

Now we have our application running with our first components.

Image Recognition API

It’s time to create some functionality: we enter an image URL, press Detect, and an image appears with a face detection box if a face exists in the image. Before that, let’s set up our Clarifai account so we can integrate the API into our app.

How To Set Up A Clarifai Account

The Clarifai API makes it possible to utilize its machine learning services. For this tutorial, we will be making use of the tier that’s available for free to developers, with 5,000 operations per month. You can read more here and sign up; after signing in, it will take you to your account dashboard. Click on “my first application” or create an application to get your API key, which we will be using in this app as we progress.


This is how your dashboard should look. Your API key there provides you with access to Clarifai services. The arrow in the image points to a copy icon you can use to copy your API key.
If you look at the Clarifai models, you will see that they use machine learning to train what are called models: they train a computer by giving it many pictures, and you can also create your own model and teach it with your own images and concepts. Here, we will be making use of their Face Detection model.
The Face Detection model has a Predict API we can call (read more in the documentation here).
So let’s install the clarifai package below.
Open your terminal and run this code:
/* Install the client from npm */
npm install clarifai
When you are done installing clarifai, we need to import the package into our app, just as we did with the earlier installations.
Next, we need to create functionality in our input search box to detect what the user enters. We need a state value so that our app knows what the user entered, remembers it, and updates it anytime it changes.
You need to have your API key from Clarifai and must have also installed clarifai through npm.
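A quick aside: the snippets later in this tutorial hard-code the API key for simplicity. If you would rather keep it out of source control, a minimal sketch with create-react-app is to read it from an environment variable; here I assume a .env file at the project root defining a hypothetical REACT_APP_CLARIFAI_API_KEY variable.
// .env (project root, not committed to version control):
// REACT_APP_CLARIFAI_API_KEY=your-key-here

// src/App.js — create-react-app exposes REACT_APP_-prefixed variables on process.env
import Clarifai from "clarifai";

const app = new Clarifai.App({
  apiKey: process.env.REACT_APP_CLARIFAI_API_KEY,
});
Remember to restart the development server after creating the .env file so the variable is picked up.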
The example below shows how we import clarifai into the app and also implement our API key.
Note that (as a user) you have to fetch a clear image URL from the web and paste it into the input field; that URL will become the imageUrl state value below.
import React, { Component } from "react";
// Import Clarifai into our App
import Clarifai from "clarifai";
import ImageSearchForm from "./components/ImageSearchForm/ImageSearchForm";
// Uncomment FaceDetect Component
import FaceDetect from "./components/FaceDetect/FaceDetect";
import "./App.css";

// You need to add your own API key here from Clarifai.
const app = new Clarifai.App({
  apiKey: "ADD YOUR API KEY HERE",
});

class App extends Component {
  // Create the state for the input and the fetched image
  constructor() {
    super();
    this.state = {
      input: "",
      imageUrl: "",
    };
  }

// setState for our input with onInputChange function
  onInputChange = (event) => {
    this.setState({ input: event.target.value });
  };

// Perform a function when submitting with onSubmit
  onSubmit = () => {
        // set imageUrl state
    this.setState({ imageUrl: this.state.input });
    app.models.predict(Clarifai.FACE_DETECT_MODEL, this.state.input).then(
      function (response) {
        // response data fetched from FACE_DETECT_MODEL
        console.log(response);
        /* the piece of the response data we actually need from the Clarifai API;
           we log both so you can compare them, and you can delete the
           console.log above once you have seen the difference */
        console.log(
          response.outputs[0].data.regions[0].region_info.bounding_box
        );
      },
      function (err) {
        // there was an error
      }
    );
  };
  render() {
    return (
      <div className="App">
        {/* update your components with their state */}
        <ImageSearchForm
          onInputChange={this.onInputChange}
          onSubmit={this.onSubmit}
        />
        {/* uncomment your FaceDetect component and update it with the imageUrl state */}
        <FaceDetect imageUrl={this.state.imageUrl} />
      </div>
    );
  }
}
export default App;
In the code block above, we imported clarifai so that we can have access to Clarifai services, and we added our API key. We use state to manage the values of input and imageUrl. We have an onSubmit function that gets called when the Detect button is clicked; it sets the imageUrl state and sends the image to the Clarifai FACE_DETECT_MODEL, which returns response data or an error.
For now, we’re logging the data we get from the API to the console; we’ll use it later when building the face detection box.
For now, there will be an error in your terminal because we need to update the ImageSearchForm and FaceDetect Components files.
Update the ImageSearchForm.js file with the code below:
import React from "react";
import "./ImageSearchForm.css";
// update the component with their parameter
const ImageSearchForm = ({ onInputChange, onSubmit }) => {
  return (
    <div className="ma5 mto">
      <div className="center">
        <div className="form center pa4 br3 shadow-5">
          <input
            className="f4 pa2 w-70 center"
            type="text"
            onChange={onInputChange}    // add an onChange to monitor input state
          />
          <button
            className="w-30 grow f4 link ph3 pv2 dib white bg-blue"
            onClick={onSubmit}  // add onClick function to perform task
          >
            Detect
          </button>
        </div>
      </div>
    </div>
  );
};
export default ImageSearchForm;
In the code block above, we passed onInputChange from props as a function to be called when an onChange event happens on the input field; we do the same with the onSubmit function, which we tie to the onClick event.
Now let us create the FaceDetect component that we uncommented in src/App.js above. Open the FaceDetect.js file and input the code below.
In the example below, we create the FaceDetect component and pass it the imageUrl prop.
import React from "react";
// Pass imageUrl to FaceDetect component
const FaceDetect = ({ imageUrl }) => {
  return (
    // This div is the container that holds our fetched image and the face detection box
    <div className="center ma">
      <div className="absolute mt2">
        {/* we set our image src to the URL of the fetched image */}
        <img alt="" src={imageUrl} width="500px" height="auto" />
      </div>
    </div>
  );
};
export default FaceDetect;
This component will display the image we fetch based on the response we get from the API. This is why we are passing imageUrl down to the component as a prop, which we then set as the src of the img tag.
Now both our ImageSearchForm and FaceDetect components are working. The Clarifai FACE_DETECT_MODEL has detected the position of the face in the image and provided us with data, but not a box; you can check the data in the console.

Now our FaceDetect component works, and the Clarifai model works, fetching an image from the URL we enter in the ImageSearchForm component. To see the response data Clarifai provides for us, and the section of that data we will need in order to annotate our result, remember that we made two console.log calls in the App.js file.
So let’s open the console to see the response.

The first console.log statement, which you can see above, is the response data from the Clarifai FACE_DETECT_MODEL when the request succeeds, while the second console.log is the data we are actually making use of to detect the face, found at outputs[0].data.regions[0].region_info.bounding_box. From the second console.log, the bounding_box data are:
bottom_row: 0.52811456
left_col: 0.29458505
right_col: 0.6106333
top_row: 0.10079138
This might look cryptic, but let me break it down briefly. At this point, the Clarifai FACE_DETECT_MODEL has detected the position of the face in the image and provided us with data, but not a box; it is up to us to do a little bit of math to display the box (or do anything else we want with the data) in our application. So let me explain the data above, with a short worked example after the list:
bottom_row: 0.52811456 indicates that the bottom edge of our face detection box sits at about 53% of the image height, measured from the top.
left_col: 0.29458505 indicates that the left edge of our face detection box sits at about 29% of the image width, measured from the left.
right_col: 0.6106333 indicates that the right edge of our face detection box sits at about 61% of the image width, measured from the left.
top_row: 0.10079138 indicates that the top edge of our face detection box sits at about 10% of the image height, measured from the top.
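As a quick sanity check (this snippet is not part of the app), here is how those fractions translate into pixels; the 500 × 333 image size is a hypothetical example, not something returned by the API.
// Hypothetical image dimensions, for illustration only.
const width = 500;
const height = 333;

const boundingBox = {
  top_row: 0.10079138,
  left_col: 0.29458505,
  bottom_row: 0.52811456,
  right_col: 0.6106333,
};

// Multiply each fraction by the matching dimension to get a pixel position.
console.log(boundingBox.left_col * width);    // ≈ 147px from the left edge
console.log(boundingBox.right_col * width);   // ≈ 305px from the left edge
console.log(boundingBox.top_row * height);    // ≈ 34px from the top edge
console.log(boundingBox.bottom_row * height); // ≈ 176px from the top edge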
If you take a look at our app interface above, you will see that the model is quite accurate in detecting the bounding_box of the face in the image. However, it is left to us to write a function that creates the box, including the styling that displays it, based on the response data provided by the API. Let’s implement that in the next section.

Creating A Face Detection Box

This is the final section of our web app, where we get our facial recognition working fully by calculating the face location of any image fetched from the web with the Clarifai FACE_DETECT_MODEL and then displaying a facial box. Let’s open our src/App.js file and include the code below:
In the example below, we create a calculateFaceLocation function that does a little bit of math with the response data from Clarifai, calculating the coordinates of the face relative to the image width and height so that we can style a face box around it.
import React, { Component } from "react";
import Clarifai from "clarifai";
import ImageSearchForm from "./components/ImageSearchForm/ImageSearchForm";
import FaceDetect from "./components/FaceDetect/FaceDetect";
import "./App.css";

// You need to add your own API key here from Clarifai.
const app = new Clarifai.App({
  apiKey: "ADD YOUR API KEY HERE",
});

class App extends Component {
  constructor() {
    super();
    this.state = {
      input: "",
      imageUrl: "",
      box: {}, // a new state object that holds the bounding_box value
    };
  }

  // this function calculates the face location in the image
  calculateFaceLocation = (data) => {
    const clarifaiFace =
      data.outputs[0].data.regions[0].region_info.bounding_box;
    const image = document.getElementById("inputimage");
    const width = Number(image.width);
    const height = Number(image.height);
    return {
      leftCol: clarifaiFace.left_col * width,
      topRow: clarifaiFace.top_row * height,
      rightCol: width - clarifaiFace.right_col * width,
      bottomRow: height - clarifaiFace.bottom_row * height,
    };
  };

  /* this function displays the face-detect box based on the state values */
  displayFaceBox = (box) => {
    this.setState({ box: box });
  };

  onInputChange = (event) => {
    this.setState({ input: event.target.value });
  };

  onSubmit = () => {
    this.setState({ imageUrl: this.state.input });
    app.models
      .predict(Clarifai.FACE_DETECT_MODEL, this.state.input)
      .then((response) =>
        // the result of calculateFaceLocation is passed to displayFaceBox as its parameter
        this.displayFaceBox(this.calculateFaceLocation(response))
      )
      // if an error occurs, log it to the console
      .catch((err) => console.log(err));
  };

  render() {
    return (
      <div className="App">
        <ImageSearchForm
          onInputChange={this.onInputChange}
          onSubmit={this.onSubmit}
        />
        {/* the box state is passed to the FaceDetect component */}
        <FaceDetect box={this.state.box} imageUrl={this.state.imageUrl} />
      </div>
    );
  }
}
export default App;
The first thing we did here was create another state value called box, an empty object that will hold the bounding-box values we calculate from the response. The next thing we did was create a function called calculateFaceLocation, which receives the response we get from the API when we call it in the onSubmit method. Inside the calculateFaceLocation method, we assign image to the element object we get from calling document.getElementById("inputimage"), which we use to perform some calculations.
  • leftCol: clarifaiFace.left_col is a fraction of the width; multiplying it by the actual image width gives us where the left edge of the box should be.
  • topRow: clarifaiFace.top_row is a fraction of the height; multiplying it by the actual image height gives us where the top edge of the box should be.
  • rightCol: we subtract (clarifaiFace.right_col * width) from the width to get the offset of the right edge from the right side of the image.
  • bottomRow: we subtract (clarifaiFace.bottom_row * height) from the height to get the offset of the bottom edge from the bottom of the image (see the worked numbers just below).
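Plugging the bounding_box values we logged earlier into this math, with the same hypothetical 500 × 333 image used in the earlier sketch (those dimensions are an assumption for illustration, not API output), calculateFaceLocation would return roughly the following.
// Hypothetical 500 × 333 image with the bounding_box values logged earlier.
const box = {
  leftCol: 0.29458505 * 500,         // ≈ 147px from the left edge
  topRow: 0.10079138 * 333,          // ≈ 34px from the top edge
  rightCol: 500 - 0.6106333 * 500,   // ≈ 195px from the right edge
  bottomRow: 333 - 0.52811456 * 333, // ≈ 157px from the bottom edge
};
These four offsets are exactly what the bounding-box div’s top, right, bottom, and left styles consume in the FaceDetect component below.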
In the displayFaceBox method, we update the box state with the data we get from calling calculateFaceLocation.
We need to update our FaceDetect component. To do that, open the FaceDetect.js file and add the following updates to it.
import React from "react";
// add css to style the facebox
import "./FaceDetect.css";
// pass the box state to the component

const FaceDetect = ({ imageUrl, box }) => {
  return (
    <div className="center ma">
      <div className="absolute mt2">
        {/* insert an id to be able to manipulate the image in the DOM */}
        <img id="inputimage" alt="" src={imageUrl} width="500px" height="auto" />
        {/* this is the div displaying the face-detect box based on the bounding_box values */}
        <div
          className="bounding-box"
          // styling that makes the box visible based on the returned values
          style={{
            top: box.topRow,
            right: box.rightCol,
            bottom: box.bottomRow,
            left: box.leftCol,
          }}
        ></div>
      </div>
    </div>
  );
};
export default FaceDetect;
In order to show the box around the face, we pass the box object down from the parent component into the FaceDetect component, and use it to position and style the bounding-box div that sits over the image.
We imported a CSS file we have not yet created. Open FaceDetect.css and add the following style:
.bounding-box {
  position: absolute;
  box-shadow: 0 0 0 3px #fff inset;
  display: flex;
  flex-wrap: wrap;
  justify-content: center;
  cursor: pointer;
}
Note the style and our final output below; you can see that we set our box-shadow color to white and the display to flex.
At this point, your final output should look like the image below: face detection is working, with a face box displayed and a white border.

Conclusion

I hope you enjoyed working through this tutorial. We’ve learned how to build a face recognition app that can be integrated into future projects with more functionality, and you have also learned how to use an amazing machine learning API with React. You can always read more about the Clarifai API in the references below. If you have any questions, you can leave them in the comments section, and I’ll be happy to answer every single one and walk you through any issues.

Resources And Further Reading






