Very Un-Disney Restaurant Policies

lazyboy97o

Well-Known Member
That actually isn't what you said. You said it isn't able to be contested.

And something on a room charge isn't a "free loan" as you haven't paid for it yet.

And if we are getting technical - you are taking a "free loan" from Disney for everything you charge to your room and have yet to pay for on your credit card.

Things aren't always perfect - there are times in my real world where an order is wrong, a store overcharges, or a pizza delivery isn't right - it is how it is handled ON BOTH SIDES that matters.

I am really glad you have never made a mistake. I have, and it sucks - it makes me have a little compassion and understanding when someone is trying to help me and make it right - or point me to the person who can.
Again, I did not. “Hinder” means “to create difficulties for (someone or something), resulting in delay.” It does not mean all-out prevention.

When I make a mistake I take ownership of resolving the issue and bridging anything that needs to be handled. I do not demand that the one negatively impacted by my mistake engage in extra work and cost to resolve the matter. It is my mistake to fix, not theirs. If a cost is involved it is mine to eat, not theirs. In these instances it is the “guest” who did not make the mistake who must cover the cost and take the time to ensure the matter is actually resolved.
 

imsosarah

Well-Known Member
Just to inform you, I was responsible for a 24x7x365 SaaS/e-commerce site, very similar to MDE. In fact, Disney was within 24 hours of signing a contract with my company to provide MDE.

In addition, the software was upgraded with zero downtime and no glitches. So, I have high standards for software.

FYI
What year did you run a 24x7x365 global e-commerce technology with millions of hits a day/hour? If recently, please let me know what software you used that had zero downtime and zero glitches. I have companies with billions of dollars to invest that would kill for a perfect system.

100% uptime is unrealistic and a misnomer.

Even at 99.99% uptime, when you factor in volume there will be some errors.
  • Roughly 4 minutes of downtime a month.
  • Over that time frame, it is possible it glitches in 20-second increments.
  • Meaning there are roughly 12 separate points at which a large number of transactions can be affected.
  • With ONLY 1 million visitors a month, that is roughly 463 transactions PER GLITCH, or about 5,555 a month.
  • Disney has more transactions than that in a day - let alone a month.
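For anyone who wants to sanity-check that kind of back-of-envelope math, here's a rough Python sketch. It assumes a 30-day month and traffic spread evenly across it; the per-glitch figures above also depend on how traffic bunches up around each glitch, so treat all of the numbers as illustrative:

```python
# Back-of-envelope downtime math (assumes a 30-day month and evenly spread traffic).
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def downtime_minutes(availability: float) -> float:
    """Minutes of downtime allowed per month at a given availability."""
    return MINUTES_PER_MONTH * (1 - availability)

def affected_visitors(availability: float, monthly_visitors: int) -> float:
    """Expected visitors who hit the downtime window, assuming uniform traffic."""
    return monthly_visitors * (1 - availability)

for availability in (0.9999, 0.99999):
    secs = downtime_minutes(availability) * 60
    hit = affected_visitors(availability, 1_000_000)
    print(f"{availability:.3%} uptime -> {secs:.0f} seconds down/month, "
          f"~{hit:.0f} of 1M monthly visitors affected (uniform traffic)")
```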

I have worked with the biggest SaaS software companies in the world - the only ones that claim 100% uptime and 100% no errors are ones that are FOS, not handling real volume, or not doing truly dynamic work across the multiple API integrations they pull from.

24/7/365 e-commerce doesn't matter - it's the volume that does, and any time there is an upgrade there is change.
 

Jon81uk

Well-Known Member
You also need to be aware that many of the eating locations in Epcot and Disney Springs are not Disney-owned but are rather run by partners, and, speaking as someone who works in more recent technology, there are times when API issues between systems can occur due to all sorts of things - even in the best systems.

Yep, some of the restaurants being third-party means they don't always have the tools to resolve everything. In some ways you could think of it as being like having a Groupon voucher: if the voucher fails for some reason, the restaurant may not be best placed to sort it out and you have to speak to Groupon. Similarly, if you have an issue with dining plan credits, the resort is usually better placed to help than the restaurant.
 

Hockey89

Well-Known Member
What year did you run a 24x7x365 global e-commerce technology with millions of hits a day/hour? If recently, please let me know what software you used that had zero downtime and zero glitches. I have companies with billions of dollars to invest that would kill for a perfect system.

100% uptime is unrealistic and a misnomer.

Even at 99.99% uptime, when you factor in volume there will be some errors.
  • Roughly 4 minutes of downtime a month.
  • Over that time frame, it is possible it glitches in 20-second increments.
  • Meaning there are roughly 12 separate points at which a large number of transactions can be affected.
  • With ONLY 1 million visitors a month, that is roughly 463 transactions PER GLITCH, or about 5,555 a month.
  • Disney has more transactions than that in a day - let alone a month.

I have worked with the biggest SaaS software companies in the world - the only ones that claim 100% uptime and 100% no errors are ones that are FOS, not handling real volume, or not doing truly dynamic work across the multiple API integrations they pull from.

24/7/365 e-commerce doesn't matter - it's the volume that does, and any time there is an upgrade there is change.
Call the fight. Lol
 

imsosarah

Well-Known Member
Again, I did not. “Hinder” means “to create difficulties for (someone or something), resulting in delay.” It does not mean all-out prevention.

When I make a mistake I take ownership of resolving the issue and bridging anything that needs to be handled. I do not demand that the one negatively impacted by my mistake engage in extra work and cost to resolve the matter. It is my mistake to fix, not theirs. If a cost is involved it is mine to eat, not theirs. In these instances it is the “guest” who did not make the mistake who must cover the cost and take the time to ensure the matter is actually resolved.

Well, I am sure that computer would have taken accountability if it were a human. Unfortunately, it is not. And in this case, the human who was not at fault took accountability in the situation and was trying to correct it - in his effort to correct it, it seems he tried a number of things before coming to the realization that a third-party support team would need to be brought in, because he was not responsible for the issue and could not resolve it correctly himself. He was, however, HONEST that it might take longer than the guests wanted to wait and gave them an option to resolve it later.

At no point was this guest ever actually put out in any way or going to be expected to pay for this meal. Charging it to the room and then making a quick call to guest relations when they returned to the hotel to have it corrected/removed would have taken moments.

It was always a case of "let's charge this to the room (where the gratuity likely would have been charged anyway) and we will make sure it gets taken care of - you won't be responsible for it," not "you are paying this, like it or not."

The OP likely spent more time arguing about how to handle it than it would have taken to just call GR.

And quite honestly, there is no point in arguing with people who lack situational understanding.
 

Djsfantasi

Well-Known Member
Original Poster
What year did you run a 24x7x365 global e-commerce technology with millions of hits a day/hour? If recently, please let me know what software you used that had zero downtime and zero glitches. I have companies with billions of dollars to invest that would kill for a perfect system.

100% uptime is unrealistic and a misnomer.

Even at 99.99% uptime, when you factor in volume there will be some errors.
  • Roughly 4 minutes of downtime a month.
  • Over that time frame, it is possible it glitches in 20-second increments.
  • Meaning there are roughly 12 separate points at which a large number of transactions can be affected.
  • With ONLY 1 million visitors a month, that is roughly 463 transactions PER GLITCH, or about 5,555 a month.
  • Disney has more transactions than that in a day - let alone a month.

I have worked with the biggest SaaS software companies in the world - the only ones that claim 100% uptime and 100% no errors are ones that are FOS, not handling real volume, or not doing truly dynamic work across the multiple API integrations they pull from.

24/7/365 e-commerce doesn't matter - it's the volume that does, and any time there is an upgrade there is change.

This was circa 2015.

Massively parallel Windows virtual machines running Java applications, with a Linux backend running parallel Oracle RAC systems. At the network level, there were dual firewalls in a failover configuration, as well as dual F5 load balancers, also in a failover configuration.

Besides fault-tolerant software, you need a fault-tolerant infrastructure. Plus, an extensive QA process. Before a single line of code went live, it had gone through four levels of testing, both by manual test plans and automated regression testing.

It can be done, but you have to commit to the software and hardware infrastructure and a rigorous approach to testing.

Expensive up front, but the payback is short. You get what you pay for.
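To illustrate the failover principle at the application level (a generic sketch only, not the actual stack described above; the endpoint names are made up), a client that falls back from the active endpoint to the standby might look like:

```python
# Illustrative client-side failover between redundant endpoints.
# The hostnames below are hypothetical placeholders.
import urllib.request
import urllib.error

ENDPOINTS = [
    "https://app-primary.example.com/health",   # active VIP (hypothetical)
    "https://app-standby.example.com/health",   # standby VIP (hypothetical)
]

def fetch_with_failover(urls, timeout=2.0):
    """Try each redundant endpoint in turn; return the first successful response."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc  # endpoint unreachable; fall through to the next one
    raise RuntimeError(f"all endpoints failed: {last_error}")

# data = fetch_with_failover(ENDPOINTS)  # succeeds as long as one endpoint is up
```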
 

imsosarah

Well-Known Member
Call the fight. Lol

People need to complain all the time.

And it is often those DEMANDING the accountability of others the most who fail to take accountability for their own role in it.
This was circa 2015.

Massively parallel Windows virtual machines running Java applications, with a Linux backend running parallel Oracle RAC systems. At the network level, there were dual firewalls in a failover configuration, as well as dual F5 load balancers, also in a failover configuration.

Besides fault-tolerant software, you need a fault-tolerant infrastructure. Plus, an extensive QA process. Before a single line of code went live, it had gone through four levels of testing, both by manual test plans and automated regression testing.

It can be done, but you have to commit to the software and hardware infrastructure and a rigorous approach to testing.

Expensive up front, but the payback is short. You get what you pay for.


First, you don't clarify your load/transaction levels, which have a major impact.

Second, you don't clarify whether you were on an on-prem version of RAC (which in 2015 is likely), a private/closed cloud, or a standard cloud deployment.

Third, there is no mention of you having to run multiple APIs (again, far more rare in 2015) vs. full-blown integrations with other systems, which took years to roll out and greatly limited your reach.

In any case, it is possible you weren't on a SaaS platform using API configurations to pull together multiple systems across multiple companies, as is standard protocol for a large global organization and for the systems being used in 2020 vs. 2015.

In this specific case, we are isolating it down even further, to an upset at a partner location, where the failure was likely an API issue between the core solution and the API feed from the local POS at the San Angel Inn (privately owned and not Disney).

Specific to the Oracle RAC system, the uptime target is often based on a maximum tolerable length of a single unplanned outage: "If the event lasts less than 30 seconds, then it may cause very little impact and may be barely perceptible to users. As the length of the outage grows, the effect may grow exponentially and negatively affect the business."

Basically, the impact is at 100% once an outage runs past 30 seconds. For 99.999% of companies that is fine, as it likely was for yours.

That isn't a knock, it is just a fact. And assuming that any company that ever faces an issue with an upgrade or product enhancement release is being cheap and effectively not doing its testing is something that would only make sense when running a smaller breadth of product. A SaaS solution today - like an ERP or even an HCM solution - may have an effectively infinite number of transactions that could result from a single change, and there are protocols (even the strictest) for how that software testing gets done. Via tech or human, there may always be a one-off situation where it can happen.

But again, the issue here is behavior. And the entitlement that perfection is the only baseline and that no one has room to fix an error unless it is EXACTLY what someone wants NOW - which in any technology isn't always possible.
 

RobWDW1971

Well-Known Member
People need to complain all the time.

And it is often those DEMANDING the accountability of others the most who fail to take accountability for their own role in it.



First, you don't clarify your load/transaction levels, which have a major impact.

Second, you don't clarify whether you were on an on-prem version of RAC (which in 2015 is likely), a private/closed cloud, or a standard cloud deployment.

Third, there is no mention of you having to run multiple APIs (again, far more rare in 2015) vs. full-blown integrations with other systems, which took years to roll out and greatly limited your reach.

In any case, it is possible you weren't on a SaaS platform using API configurations to pull together multiple systems across multiple companies, as is standard protocol for a large global organization and for the systems being used in 2020 vs. 2015.

In this specific case, we are isolating it down even further, to an upset at a partner location, where the failure was likely an API issue between the core solution and the API feed from the local POS at the San Angel Inn (privately owned and not Disney).

Specific to the Oracle RAC system, the uptime target is often based on a maximum tolerable length of a single unplanned outage: "If the event lasts less than 30 seconds, then it may cause very little impact and may be barely perceptible to users. As the length of the outage grows, the effect may grow exponentially and negatively affect the business."

Basically, the impact is at 100% once an outage runs past 30 seconds. For 99.999% of companies that is fine, as it likely was for yours.

That isn't a knock, it is just a fact. And assuming that any company that ever faces an issue with an upgrade or product enhancement release is being cheap and effectively not doing its testing is something that would only make sense when running a smaller breadth of product. A SaaS solution today - like an ERP or even an HCM solution - may have an effectively infinite number of transactions that could result from a single change, and there are protocols (even the strictest) for how that software testing gets done. Via tech or human, there may always be a one-off situation where it can happen.

But again, the issue here is behavior. And the entitlement that perfection is the only baseline and that no one has room to fix an error unless it is EXACTLY what someone wants NOW - which in any technology isn't always possible.
Is this what IT discussion boards are like? Who knew they were so testy?! #saassmack
 

TomDisney

Active Member
Also remember this with regard to the Disney IT department. The IT department worked hard on developing the system that is in place now. Just as it was being deployed, these same people were told that they were being laid off and replaced by contractors from India, both here on H-1B visas and located in India. This was back in 2015. I don't know how many of these jobs may have come back since that time, but having worked in IT myself for over 30 years, I have seen many times that companies outsource to cheaper consulting firms, get what they pay for, and end up spending even more money to hire more people to fix the issues.

 

lazyboy97o

Well-Known Member
Well, I am sure that computer would have taken accountability if it were a human. Unfortunately, it is not. And in this case, the human who was not at fault took accountability in the situation and was trying to correct it - in his effort to correct it, it seems he tried a number of things before coming to the realization that a third-party support team would need to be brought in, because he was not responsible for the issue and could not resolve it correctly himself. He was, however, HONEST that it might take longer than the guests wanted to wait and gave them an option to resolve it later.

At no point was this guest ever actually put out in any way or going to be expected to pay for this meal. Charging it to the room and then making a quick call to guest relations when they returned to the hotel to have it corrected/removed would have taken moments.

It was always a case of "let's charge this to the room (where the gratuity likely would have been charged anyway) and we will make sure it gets taken care of - you won't be responsible for it," not "you are paying this, like it or not."

The OP likely spent more time arguing about how to handle it than it would have taken to just call GR.

And quite honestly, there is no point in arguing with people who lack situational understanding.
It is Disney’s system so they should own the errors. This was not some one-off issue that had not been seen before but is a somewhat regular occurrence. Disney should have a process in place that requires no action on the part of the guest. Even with the third-party operators, Disney, and not the guest, should provide the guarantee on the meal because it is Disney’s system at fault.

The guest was put out by having to cover the meal and then seek its resolution. This is not a process that takes only a few minutes; it can easily take hours, and that assumes you get right to a person who can resolve the issue rather than being told to seek assistance elsewhere. That is not even close to “we will make sure it gets taken care of”.
 

RustySpork

Oscar Mayer Memer
What year did you run a 24x7x365 global e-commerce technology with millions of hits a day/hour? If recently, please let me know what software you used that had zero downtime and zero glitches. I have companies with billions of dollars to invest that would kill for a perfect system.

100% uptime is unrealistic and a misnomer.

No, it's expensive. You can achieve near 100% (5 9s or greater) with a good blue/green and geo-load balancing strategy, but it comes at a cost, especially if you host it yourself. If you share the load between cloud providers it's much more cost-effective, but still expensive. This can effectively hide outages from your customers and give the appearance of 100% uptime.

If the customer doesn't know something is down when it's down, it's "not down".
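A minimal sketch of that blue/green idea, with made-up environment names rather than anyone's real setup: keep two identical environments, weight traffic onto the new one only after it passes health checks, and customers never see the switchover.

```python
# Minimal blue/green sketch (hypothetical names): two identical environments,
# traffic weighted onto the new one only after its health checks pass.
import random

environments = {
    "blue":  {"healthy": True, "weight": 100},  # current production
    "green": {"healthy": True, "weight": 0},    # new release, idle until promoted
}

def promote(new_env: str, old_env: str) -> None:
    """Shift all traffic to the new environment, but only if it is healthy."""
    if environments[new_env]["healthy"]:
        environments[new_env]["weight"] = 100
        environments[old_env]["weight"] = 0

def route_request() -> str:
    """Pick an environment by weight; unhealthy or zero-weight ones get no traffic."""
    live = {name: env for name, env in environments.items()
            if env["healthy"] and env["weight"] > 0}
    names = list(live)
    return random.choices(names, weights=[live[n]["weight"] for n in names], k=1)[0]

promote("green", "blue")   # cut over once green passes its health checks
print(route_request())     # customers keep hitting a healthy environment -> 'green'
```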

Even at 99.99% uptime, when you factor in volume there will be some errors.
  • Roughly 4 minutes of downtime a month.
  • Over that time frame, it is possible it glitches in 20-second increments.
  • Meaning there are roughly 12 separate points at which a large number of transactions can be affected.
  • With ONLY 1 million visitors a month, that is roughly 463 transactions PER GLITCH, or about 5,555 a month.
  • Disney has more transactions than that in a day - let alone a month.

Who designs for 4 9's anymore? That's a legacy pattern.

I have worked with the biggest SaaS software companies in the world - the only ones that claim 100% uptime and 100% no errors are ones that are FOS, not handling real volume, or not doing truly dynamic work across the multiple API integrations they pull from.

24/7/365 e-commerce doesn't matter - it's the volume that does, and any time there is an upgrade there is change.

Test upgrades and code changes as canary deployments, and shift your strategy to a container model. Changes (upgrades, deploys, backend outages) don't have to mean downtime. Embrace chaos engineering to help you learn where to tune and adapt your infrastructure for resiliency.
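As a rough sketch of what a canary gate can look like (the numbers and threshold below are illustrative, not anyone's real values): send a small slice of traffic to the new build, compare its error rate to the baseline, and only promote if it isn't meaningfully worse.

```python
# Illustrative canary gate: promote only if the canary's error rate is not
# meaningfully worse than the baseline's. All numbers below are made up.
def error_rate(errors: int, requests: int) -> float:
    return errors / requests if requests else 0.0

def canary_decision(baseline, canary, tolerance=0.001):
    """baseline/canary are (errors, requests) tuples; returns 'promote' or 'rollback'."""
    return ("promote"
            if error_rate(*canary) <= error_rate(*baseline) + tolerance
            else "rollback")

# Baseline: 120 errors in 500,000 requests (0.024%); canary: 60 in 25,000 (0.24%).
print(canary_decision((120, 500_000), (60, 25_000)))  # -> rollback
```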
 

imsosarah

Well-Known Member
Who designs for 4 9's anymore? That's a legacy pattern.

Treat upgrades as canary deployments, and shift your strategy to a container model. Change (upgrades, deploys, backend outages) doesn't have to mean downtime. Embrace chaos engineering to help you learn where to tune and adapt your infrastructure for resiliency.

TOTALLY AGREE!!! Many do still run on 99.99%, unfortunately - especially if they have built it partially in-house. For some reason, many global companies find that an acceptable number.


Agreed, let's relook then...

Even at 99.999% there will be some errors; it is not 100%.
  • 26.3 seconds per month of downtime
  • With ONLY 1 million visitors a month, that is still more than 500 transactions a month.
  • Disney has more transactions than that in a day - let alone a month.

There is STILL an unfortunate chance that someone could be part of those ~500 transactions that run into an issue.

It is simply about giving some grace when the manager HAD NO CONTROL to do anything differently and the fix was pretty simple - something many of us who have done multiple trips to Disney have had to do, and we are usually comped above and beyond to make up for any inconvenience.
 

RustySpork

Oscar Mayer Memer
Agreed, let's relook then...

With 99.999% (which is still the standard, and many do still run at 99.99%), when you factor in volume there will be some errors; it is not 100%.
  • 26.3 seconds per month of downtime
  • With ONLY 1 million visitors a month, that is still more than 500 transactions a month.
  • Disney has more transactions than that in a day - let alone a month.

There is STILL an unfortunate chance that someone could be part of those ~500 transactions that run into an issue.

Transactional errors may not contribute to a downtime calculation; that would depend on the wording and agreement within an SLA. I don't believe Disney has made any uptime commitments to the public.
 

RustySpork

Oscar Mayer Memer
It is Disney’s system so they should own the errors. This was not some one-off issue that had not been seen before but is a somewhat regular occurrence. Disney should have a process in place that requires no action on the part of the guest. Even with the third-party operators, Disney, and not the guest, should provide the guarantee on the meal because it is Disney’s system at fault.

The guest was put out by having to cover the meal and then seek its resolution. This is not a process that takes only a few minutes; it can easily take hours, and that assumes you get right to a person who can resolve the issue rather than being told to seek assistance elsewhere. That is not even close to “we will make sure it gets taken care of”.

Seems like they have an issue of overcommitment and underprovisioning, as this sort of error would be a symptom, especially if it was a common occurrence. It could also be a firmware problem within the POS system itself. I'd assume they're connected through some backchannel network that's dedicated to vendors and probably lacks redundancy. There's probably an SLA there, but it may just promise basic connectivity and best-effort functionality.
 

xdan0920

Think for yourselfer
Well, I am sure that computer would have taken accountability if it were a human. Unfortunately, it is not. And in this case, the human who was not at fault took accountability in the situation and was trying to correct it - in his effort to correct it, it seems he tried a number of things before coming to the realization that a third-party support team would need to be brought in, because he was not responsible for the issue and could not resolve it correctly himself. He was, however, HONEST that it might take longer than the guests wanted to wait and gave them an option to resolve it later.

At no point was this guest ever actually put out in any way or going to be expected to pay for this meal. Charging it to the room and then making a quick call to guest relations when they returned to the hotel to have it corrected/removed would have taken moments.

It was always a case of "let's charge this to the room (where the gratuity likely would have been charged anyway) and we will make sure it gets taken care of - you won't be responsible for it," not "you are paying this, like it or not."

The OP likely spent more time arguing about how to handle it than it would have taken to just call GR.

And quite honestly, there is no point in arguing with people who lack situational understanding.
As fun as it’s been to read your techno jargon, I’d like to simplify if possible.

You believe it should be the guest’s responsibility to...

1. Graciously double pay.
2. Return to the hotel to sort it out with the front desk.
3. Not be upset about it.
 

RustySpork

Oscar Mayer Memer
Seems legit 😉
No way Diznee could be at fault 🙄

The obvious answer is obvious. Before making a purchase at Disney World, guests should review the contracts between each vendor and WDW and send synthetic transactions across the network to make sure everything is working properly before inserting their credit cards or scanning their magic bands.
 
