What is a Pinned Archive Token (PAT)? This article will introduce you to the technology.
In the world of internet and communication technology, we are always looking for ways to make data transmission lighter, faster, and more sustainable. Data transmission, or data communication, is an important and integral part of the field. It is the transfer of data (a digital bit stream or a digitized analog signal) over a point-to-point or point-to-multipoint communication channel.
Over the years, technology experts have been racing to invent better ways to transmit data from a source to end users. The need for light-speed yet accurate data transmission has become a major topic at technology conventions.
Nowadays, the concept and model of data transmission is beginning to shift from a server-client basis to collectively shared data. What this means is that data is no longer treated like a waterfall but more like a lake: values are served as a buffet that anyone can jump into and take from. No one seems to mind, as long as they can keep their privacy when zipping up the data.
If exchanging and sharing data is the new landscape, how do we make it even better? That question is what Pinned Archive Tokens (PAT) are based on. How can we make something that's shared serve the greater good?
Sharing data is a wonderful concept, and cloud technology is arguably a technological revolution. Yet the data delivered to end users is still not shared completely; it still has to go through HTTP, which is a server-client portal. Many studies argue that HTTP is obsolete and needs to be replaced, but the infrastructure costs and collateral damage such a migration could cause make major industries think twice. Meanwhile, people are growing in both their appetite for stability and their thirst for a more advanced system. So what can we do about that?
How can HTTP be made to serve as a pond or lake instead of a door? I say: don't change the door; change the form of what passes through it. Instead of having the data pass through the door as a flow, pack the data into a package and push it through the gate as a chunk. Let the portal think it is handling a package instead of a stream.
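The article doesn't define the packaging format, so here is only a minimal sketch of the "package instead of stream" idea, assuming JSON records bundled and compressed into one opaque blob (the record fields are illustrative, not Dropshix internals):

```python
import json
import zlib

def pack(records):
    """Bundle scattered records into one compressed payload.
    An HTTP endpoint then sees a single opaque chunk, not a stream."""
    blob = json.dumps(records, separators=(",", ":")).encode("utf-8")
    return zlib.compress(blob, 9)

def unpack(payload):
    """Reverse the packaging on the receiving end."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))

records = [
    {"item": "jacket", "price": 19.99},  # illustrative records
    {"item": "shoe", "price": 34.50},
]
payload = pack(records)
assert unpack(payload) == records  # round-trips intact as one blob
```

The payload could then travel in a single POST body, so the portal treats it as one package rather than many small transfers.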
Now, how do we compile data that is scattered across the cloud and group it into packages? Do we need a compressor running in the cloud as a bot? We actually don't. Why? Because data in the cloud is already chunked: users' activities have grouped it by sending metadata back and forth. We just need to pin the chunks to mark which ones we need.
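As a loose illustration of pinning (my own sketch; the article doesn't specify how pins are stored), chunks already grouped by user metadata only need a marker recording which ones are wanted:

```python
# Chunks already grouped by prior user activity (illustrative data,
# not a real cloud layout).
cloud_chunks = {
    "chunk-01": {"terms": ["shoe", "sneaker"]},
    "chunk-02": {"terms": ["skirt", "dress"]},
    "chunk-03": {"terms": ["laptop"]},
}

def pin(chunks, wanted_term):
    """Mark (pin) the chunks whose metadata mentions the wanted term."""
    return {cid for cid, meta in chunks.items() if wanted_term in meta["terms"]}

assert pin(cloud_chunks, "skirt") == {"chunk-02"}
```

No compressor bot is involved: the grouping was done beforehand by the users' own metadata traffic.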
Awesome! People's activities have unknowingly done most of the work.
The basic algorithm.
So how do we make a calculation that marks the data in cloud engines and servers?
Imagine this: A is a user and B is a user, and they search for different items in different categories, with different terms, even in different languages. How can we group what A is looking for and what B is looking for? It seems impossible, but we forget one thing: the cloud is still an engine, and an engine uses its own machine language to process a transaction. So if A is looking for a shoe and B is looking for a skirt, we can already translate and decode "s", "h", "o", "e", "k", "i", "r", "t". Imagine millions of users doing the same thing at the same time. We already have enough information about whatever we are looking for, without the hassle of translating and re-encoding everything.
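The character-pooling idea above can be sketched as follows (this is my own reading, not the production algorithm; the Jaccard similarity measure is an assumption used only to make the idea concrete):

```python
def char_pool(queries):
    """Pool the raw characters seen across all user queries,
    without decoding any individual query."""
    pool = set()
    for q in queries:
        pool |= set(q.lower())
    return pool

def similarity(a, b):
    """Share of characters two queries have in common (Jaccard index)."""
    sa, sb = set(a.lower()), set(b.lower())
    return len(sa & sb) / len(sa | sb)

pool = char_pool(["shoe", "skirt"])
# "shoe" and "skirt" together already cover s, h, o, e, k, i, r, t
assert pool == set("shoekirt")
```

With millions of concurrent queries, the pool grows quickly, which is the point of the example with users A and B.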
That is the basic algorithm of PAT: instead of decrypting the language and re-encoding it, we use the data as it is and create a token for every similarity within the differences to identify the requester or accessor. The calculation is as follows:
X = Z - ((A1 + B1 + C1) / Y * (A1 + B1 + C1))
if X <> 1 then skip
if X == 1 then A = 1
if X <= 1 then B = 1
if X >= 1 then C = 1
...and so on. For a simple term like "jacket", the token is xc1fgtw97785000000911, and PAT pushes all the related data (names, details, images, etc.) into that one token.
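Read literally, the marking step could look like the sketch below. Z, Y, A1, B1, C1 are left as free parameters because the article doesn't define them, and the initial "skip" rule conflicts with the <= / >= branches, so this sketch keeps only the flag assignments:

```python
def mark(z, y, a1, b1, c1):
    """Literal transcription of the PAT marking formula (a sketch;
    the meaning of Z, Y, A1, B1, C1 is not defined in the article)."""
    s = a1 + b1 + c1
    x = z - (s / y * s)
    flags = {"A": 0, "B": 0, "C": 0}
    if x == 1:
        flags["A"] = 1
    if x <= 1:
        flags["B"] = 1
    if x >= 1:
        flags["C"] = 1
    return x, flags

x, flags = mark(z=10, y=3, a1=1, b1=1, c1=1)
# x = 10 - (3/3 * 3) = 7.0, so only the C flag is set
assert flags == {"A": 0, "B": 0, "C": 1}
```

How the flags map to the final token string (such as xc1fgtw97785000000911) is not spelled out in the article, so the sketch stops there.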
Now that the data is pinned and in the form of a token, it passes through HTTP roughly 90% lighter than the original data, because it is transmitted as a single string in the eyes of HTTP (the portal).
Why did I implement the technology in Dropshix?
Drop shipping is a business that needs fast data and top-notch data accuracy. A tool used by thousands of users who each import thousands of records needs a lightweight engine that can still process complex data such as images, links, floats, and other details. Honestly, I wasn't sure this would work, at least not until I updated its basic algorithm, which I hope I can share soon (I won't share it if no one is interested; I'm making money from it anyway, with around 1,000 Dropshix users).
I also think that the drop shipping business should benefit three parties: the drop shipper, the supplier store, and the manufacturer. With data being transferred as a token, you must be wondering how that can benefit all of those parties. The answer lies in the moment the data is unpacked ("exploded") on the end user's side. The HTTP protocol treats it as metadata and cache, which is fuel and food for any crawler bot. It registers as valid, organic views in massive amounts (because it's faster, the amount of data per minute is multiplied). Everyone is happy.
So if you're a Dropshix user and have noticed that, since you started using Dropshix, you are getting much more engagement in your campaigns, this is the reason. With massive amounts of fresh, organic data being processed by crawler bots, the supplier store's performance is pushed up in terms of traffic. And since the token is generated from similarity within data differences, crawlers will always prioritize data with similarity over non-similar data, which in the end lifts your listings in WooCommerce (as long as the source data is live on the internet).
Does Dropshix boost AliExpress?
Yes, it does. Because it's faster but is still scanned as normal browsing activity by the protocol, it increases the traffic numbers for all products on AliExpress. It also boosts the page speed performance record, leaving no "out of time" (timeout) errors on any product page.
So Dropshix doesn't just tap the data for its users; it also pushes data back as visitor records. In doing so, it gives additional value to AliExpress stores: the more products Dropshix users import, the more similar data is collected by crawler engines. In the end, this creates a trend, with many other trends in the making along the way.
To be continued.
So this is the end of part #1. The article itself is far from perfect and will need fixes here and there, but I hope it gives you a clearer understanding of Dropshix from a technology point of view. In the end, I hope you come to believe that you're in good hands.
Please show your interest in this article by leaving comments and feedback; I would love to hear any kind of constructive feedback. Just please don't spam or make other readers uncomfortable. I love discussing technology, so if you're interested, I'm ready to talk on Telegram or Skype.
Thank you for reading, and see you in part #2. Hopefully.