VirtueMart

Export shows product without tax

| VirtueMart

Sometimes an export looks like it has no tax. This could actually be correct.

 

What is happening?

When a file is uploaded and no product tax is included in the upload, the product gets a product_tax_id of 0. When you then check the product in VirtueMart, it does show a tax rate.

However, the database still says 0, and saving the product does not change that. This happens because the tax rate has not been changed in any way, so VirtueMart leaves it as it is. There are two ways to make this work:

  1. Change the tax on the product page to another value and then change it back. VirtueMart will then save the new tax value.
  2. Import a tax rate or tax ID via CSV Improved.
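For the second option, a minimal import file could look like the sketch below. The SKU and tax ID values are made up for illustration; product_tax_id must match the ID of an existing tax rate in your shop:

```
"product_sku","product_tax_id"
"PROD1","1"
```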

After that, CSV Improved can export the product price with tax.


No product SKU found


The message "No product SKU found" is a very common one, and the cause is almost always the same: delimiters.

When importing a file, usually a CSV file in this case, the data is separated by delimiters. For CSV Improved to understand the file that is being imported it needs to be told what delimiters the file is using. Let me show you with an example.

 

The delimiter setting in CSV Improved:

[Screenshot: the field delimiter settings in CSV Improved]

 

The file:

~product_sku~^~product_name~^~category_path~
~Wb1~^~Product1~^~Home/New Products~
~Wb2~^~Product2~^~Home/Products~
~Wb3~^~Product3~^~Home/Products~

 

The ^ separates the fields and the ~ encloses the text. Now CSV Improved knows how to read the file. Make sure the delimiters set in the template always match the ones used in the file.
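How the two delimiters work together can be sketched with standard shell tools. This is only an illustration of the parsing, not what CSV Improved does internally: cut splits on the field delimiter and tr strips the text enclosure.

```shell
# One data line from the example file above
line='~Wb1~^~Product1~^~Home/New Products~'

# Take the first field (split on '^') and strip the '~' enclosure
sku=$(printf '%s' "$line" | cut -d'^' -f1 | tr -d '~')
echo "$sku"
# prints Wb1
```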


Debugging Google Base Export


Many people have reported issues with the Google Base upload after exporting products using CSV Improved. In most cases it is not CSV Improved exporting a bad file but the file containing characters Google Base does not like. These characters are often not even easily visible but they can be discovered using the right tools.

Procedure

This guide is based on a Linux operating system; the same or similar tools may be available on Windows, but no guarantees.

Manual mode: Locating the offending character

The first step is to check whether your XML file is well-formed. This check only verifies that there are no unclosed tags, but as a bonus it also reports any offending characters it finds: the same characters Google does not like.

To do this, log into your system and go to where your XML file is. Once there, execute the following command:

xmlwf myfile.xml

In case the file is perfectly fine, you will receive no message. This means your file is ready to upload to Google Base. If there is a problem with the file you will receive a message that includes both the line number and the character position of the offending character. This is important information because now it is clear where the problem is. The message looks like this:

myfile.xml:7:291: not well-formed (invalid token)

In this example the offending character is on line 7 and position 291.
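Those two numbers can be pulled out of the message with plain shell parameter expansions, which is exactly what the script further down does. A small sketch using the example message:

```shell
# The error message xmlwf produced (filename:line:position: description)
message='myfile.xml:7:291: not well-formed (invalid token)'

rest=${message#*:}                     # strip the filename
LINEPOS=${rest%%:*}; rest=${rest#*:}   # line number: 7
CHARPOS=${rest%%:*}                    # character position: 291
echo "line=$LINEPOS position=$CHARPOS"
# prints line=7 position=291
```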

Semi-automatic: Locating the offending character

For this task I have written an analyze script in bash and PHP. The bash script looks for any errors xmlwf generates and extracts the offending line from the XML file; the offending character is then written to a logfile by the PHP script. Here is the bash script:

#!/bin/sh
# Set some variables
FOLDER='/home/mysite/jobs/'
FILE='myfile.xml'
LOGFILE='filecheck.log'
ANALYZEFILE='analyze.log'
ANALYZESCRIPT='analyze.php'

# Go to the appropriate folder
cd "$FOLDER" || exit 1

# Check if the file exists
if [ -e "$FILE" ]; then
    # Check if the file is well-formed
    if /usr/bin/xmlwf "$FILE" | grep -q 'not well-formed'; then
        echo "XML not valid" >> "$LOGFILE"
        message=$(/usr/bin/xmlwf "$FILE")
        echo "$message" >> "$LOGFILE"

        # Analyze the error message (filename:line:position: description)
        rest=${message#*:}
        # Get the line number
        LINEPOS=${rest%%:*}; rest=${rest#*:}
        # Get the position where the error is
        CHARPOS=${rest%%:*}

        # Put the offending line in a file to be analyzed
        sed "${LINEPOS}q;d" "$FILE" > "$ANALYZEFILE"

        # Analyze the line
        /usr/bin/php -f "$ANALYZESCRIPT" analyzefile="$ANALYZEFILE" charpos="$CHARPOS" >> "$LOGFILE"
    fi
fi

# Add a linebreak
echo "" >> "$LOGFILE"

Here is the PHP script:

<?php
/* Get the variables passed on the command line */
list(, $analyzefile) = explode("=", $argv[1]);
list(, $charpos) = explode("=", $argv[2]);
$all = false;
if (isset($argv[3])) {
    list(, $all) = explode("=", $argv[3]);
}

/* Read the line to be analyzed */
$content = file_get_contents($analyzefile);

if (!$all) {
    /* Output only the offending character */
    echo "Position: ".$charpos." ";
    echo "Character: ".substr($content, $charpos, 1);
    echo " ";
    echo "Ordinal: ".ord(substr($content, $charpos, 1));
    echo "\n";
}
else {
    /* Output every character on the line */
    $counter = 1;
    while (strlen($content) > 0) {
        echo "Position: ".$counter." ";
        $counter++;
        echo "Character: ".substr($content, 0, 1);
        echo " ";
        echo "Ordinal: ".ord(substr($content, 0, 1));
        echo "\n";
        $content = substr($content, 1);
    }
}
?>

It is recommended to save the bash script to a file called checkfile.sh and the PHP script to a file named analyze.php.

Using the scripts

Save both files in the same location and copy the offending XML file there too. The bash script has a few settings:

# Set some variables
FOLDER='/home/mysite/jobs/'
FILE='myfile.xml'
LOGFILE='filecheck.log'
ANALYZEFILE='analyze.log'
ANALYZESCRIPT='analyze.php'

Adjust the variables as needed. The logfiles will be created if they do not exist.

FOLDER = location where the scripts and XML file are
FILE = the name of the XML file to check
LOGFILE = file where the result is saved
ANALYZEFILE = temporary file used to store the offending line
ANALYZESCRIPT = PHP script that analyzes the ANALYZEFILE

To run the script execute the following command:

sh checkfile.sh

The result can be found in the file called filecheck.log. This file can be viewed with the following command:

vi filecheck.log

The output can look like this:

XML not valid
/home/mysite/jobs/myfile.xml:7:291: not well-formed (invalid token)
Position: 291 Character:  Ordinal: 160
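Ordinal 160 is a non-breaking space (U+00A0), one of the most common offenders in copy-pasted product descriptions. As a quick alternative, grep can find this character directly; sample.txt and its contents below are made up for illustration:

```shell
# Create a sample file whose second line contains a non-breaking space
# (\302\240 is the octal form of the UTF-8 bytes 0xC2 0xA0)
printf 'clean line\nbad\302\240line\n' > sample.txt

# Show every line containing a non-breaking space, with its line number
LC_ALL=C grep -n "$(printf '\302\240')" sample.txt
```

This reports line 2, the line with the invisible character.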

Now it is easy to find this character in the offending XML file by opening the XML file like this:

vi myfile.xml

This opens the VI editor and shows the myfile.xml file. Jump to the problem line 7 with this command:

:7

Press enter after that. Now it is time to look for the strange character that was shown in the filecheck.log.

Alternatively the analyze.log file can be opened which contains only the single line with the offending character, like this:

vi analyze.log

While it might be possible to remove the problem character from the XML file here, it is better to remove it from the source (database/VirtueMart/spreadsheet); otherwise a newly created XML file will contain the same problem. The advantage is that you now know where to look in the file.


How to import products?


Importing products successfully depends on how the template and the file to be imported are set up. There are a few requirements that apply:

  1. Use only CSVI supported field names
  2. Use correct delimiters when importing CSV files or make sure there is no empty column data when importing XLS files
  3. Only import file sizes your system can handle
  4. Format your data correctly

Available fields

CSV Improved can only import fields that are listed in the available fields list. It is not possible to use your own fields. The only exception here is custom database fields in the product table. This is a new feature as of the 0.9 version.

Correct layout

For CSV files it is important to have the correct delimiters. The template and the file to import must use the same delimiters. For example, if the template has the delimiters " and , while the file to import has ^ and ~, the import will fail.

For XLS files, spreadsheet programs do not always clear empty fields. For example, columns A through D are filled and column E contains some test data. Before importing, the data in column E is deleted. It looks deleted, but the fields are actually filled with an empty value, and this causes the import to fail because those empty values are still read. Make sure all data is really deleted by deleting the complete column instead of just the fields.

Size does matter

The size of the file to import is very important. How big a file can be depends on the server settings. Most important are the amount of memory available and the maximum execution time. The larger the file to import, the more memory and time is needed to process it.

CSV files use a lot less memory than XLS files, because a CSV file can be read one line at a time while an XLS file needs to be read in full on import. As the import process collects data during the import, more memory is used; this is usually about the size of your import file. Unfortunately, there is no exact way to calculate the amount of memory used.
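If a file is too large for the server limits, one practical workaround is to split it into smaller chunks and import them one by one. A sketch using standard Unix tools (file names and sizes are examples; the header row is repeated in every chunk so the column mapping still works):

```shell
# Create a small example file: one header row plus three data rows
printf 'product_sku,product_name\nA1,One\nA2,Two\nA3,Three\n' > big.csv

head -n 1 big.csv > header.csv           # keep the header row
tail -n +2 big.csv | split -l 2 - chunk_ # 2 data rows per chunk

# Prepend the header to every chunk
for f in chunk_*; do
    cat header.csv "$f" > "import_$f" && rm "$f"
done

ls import_chunk_*
```

This produces import_chunk_aa, import_chunk_ab, and so on, each a valid import file on its own.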

Format your data

All data should be formatted correctly. The available fields page explains per field what the data should look like. If data is not formatted correctly, the import will still happen, but the data will not show up as expected.

Example CSV

An example of how a CSV file could look:

"product_sku","product_name","category_path","product_desc","product_price","product_full_image","product_thumb_image"
"PIE1","Apple pie","Pies/Apple","A delicious apple pie","9.99","applepie.jpg","resized/applepie.jpg"
